Test Report: KVM_Linux_crio 19337

a9f4e4a9a8ef6f7d1064a3bd8285d9113f3d3767:2024-07-29:35545
Failed tests (30/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 155.44
45 TestAddons/parallel/MetricsServer 359.31
54 TestAddons/StoppedEnableDisable 154.46
173 TestMultiControlPlane/serial/StopSecondaryNode 141.95
175 TestMultiControlPlane/serial/RestartSecondaryNode 61.72
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 382.28
180 TestMultiControlPlane/serial/StopCluster 141.99
240 TestMultiNode/serial/RestartKeepsNodes 335.52
242 TestMultiNode/serial/StopMultiNode 141.43
249 TestPreload 176.4
257 TestKubernetesUpgrade 389.46
292 TestPause/serial/SecondStartNoReconfiguration 88.33
323 TestStartStop/group/old-k8s-version/serial/FirstStart 295.56
348 TestStartStop/group/no-preload/serial/Stop 139.17
351 TestStartStop/group/embed-certs/serial/Stop 139.11
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.18
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.26
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 699.32
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.42
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.3
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.38
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.57
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 428.74
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 465.14
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 336.65
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 149.85
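Any entry above can usually be reproduced in isolation by re-running that single test against the same driver and runtime. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the -minikube-start-args flag (and any build tags the harness requires) are assumptions taken from the start arguments visible in the logs below, not verified here:

    # hypothetical single-test re-run; adjust flags to whatever test/integration actually defines
    cd test/integration
    go test -v -timeout 60m -run 'TestAddons/parallel/Ingress' . \
      -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'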
TestAddons/parallel/Ingress (155.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-342031 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-342031 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-342031 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2af2b59-c59f-4341-a2d3-88a65f799b1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2af2b59-c59f-4341-a2d3-88a65f799b1c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004291507s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-342031 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.356107909s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-342031 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.224
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 addons disable ingress-dns --alsologtostderr -v=1: (1.431097337s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 addons disable ingress --alsologtostderr -v=1: (7.692497774s)
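Note on the failure above: ssh returns the exit status of the remote command, and curl exits with 28 when an operation times out, so the request to the ingress controller on 127.0.0.1:80 inside the VM never received a response within curl's window. A quick manual check, assuming the addons-342031 profile is still up (the commands mirror the ones already in this log; --max-time is an added diagnostic flag):

    # re-issue the failing request from inside the VM, with verbose output and a short timeout
    out/minikube-linux-amd64 -p addons-342031 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # confirm the ingress-nginx controller pod is Running and inspect its recent logs
    kubectl --context addons-342031 -n ingress-nginx get pods -o wide
    kubectl --context addons-342031 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50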
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-342031 -n addons-342031
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 logs -n 25: (1.214022889s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-120370                                                                     | download-only-120370 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:22 UTC |
	| delete  | -p download-only-876146                                                                     | download-only-876146 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-960068 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | binary-mirror-960068                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33367                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-960068                                                                     | binary-mirror-960068 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-342031 --wait=true                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | -p addons-342031                                                                            |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-342031 ssh cat                                                                       | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | /opt/local-path-provisioner/pvc-48e69630-5ff6-45b0-be49-8c195291cc40_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | -p addons-342031                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| ip      | addons-342031 ip                                                                            | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-342031 ssh curl -s                                                                   | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-342031 addons                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:26 UTC | 29 Jul 24 10:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-342031 addons                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:26 UTC | 29 Jul 24 10:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-342031 ip                                                                            | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:22:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:22:26.509615   12698 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:22:26.509721   12698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:22:26.509731   12698 out.go:304] Setting ErrFile to fd 2...
	I0729 10:22:26.509735   12698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:22:26.509914   12698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:22:26.510506   12698 out.go:298] Setting JSON to false
	I0729 10:22:26.511389   12698 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":292,"bootTime":1722248254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:22:26.511444   12698 start.go:139] virtualization: kvm guest
	I0729 10:22:26.513397   12698 out.go:177] * [addons-342031] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:22:26.514719   12698 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:22:26.514718   12698 notify.go:220] Checking for updates...
	I0729 10:22:26.516186   12698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:22:26.517423   12698 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:22:26.518519   12698 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:26.519638   12698 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:22:26.520663   12698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:22:26.521870   12698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:22:26.553944   12698 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 10:22:26.555240   12698 start.go:297] selected driver: kvm2
	I0729 10:22:26.555261   12698 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:22:26.555273   12698 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:22:26.555930   12698 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:22:26.555994   12698 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:22:26.569912   12698 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:22:26.569951   12698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:22:26.570162   12698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:22:26.570219   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:22:26.570231   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:22:26.570237   12698 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:22:26.570286   12698 start.go:340] cluster config:
	{Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:22:26.570368   12698 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:22:26.572152   12698 out.go:177] * Starting "addons-342031" primary control-plane node in "addons-342031" cluster
	I0729 10:22:26.573232   12698 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:22:26.573267   12698 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:22:26.573279   12698 cache.go:56] Caching tarball of preloaded images
	I0729 10:22:26.573357   12698 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:22:26.573370   12698 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:22:26.573697   12698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json ...
	I0729 10:22:26.573722   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json: {Name:mkb6347d0153e8c41bb0cc11c9c9fd0fb7c24f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:22:26.573892   12698 start.go:360] acquireMachinesLock for addons-342031: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:22:26.573954   12698 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "addons-342031"
	I0729 10:22:26.573975   12698 start.go:93] Provisioning new machine with config: &{Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:22:26.574046   12698 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 10:22:26.575638   12698 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 10:22:26.575757   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:22:26.575805   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:22:26.589669   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0729 10:22:26.590158   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:22:26.590932   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:22:26.590956   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:22:26.591306   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:22:26.591524   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:26.591687   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:26.591867   12698 start.go:159] libmachine.API.Create for "addons-342031" (driver="kvm2")
	I0729 10:22:26.591895   12698 client.go:168] LocalClient.Create starting
	I0729 10:22:26.591928   12698 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:22:26.787276   12698 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:22:26.935076   12698 main.go:141] libmachine: Running pre-create checks...
	I0729 10:22:26.935101   12698 main.go:141] libmachine: (addons-342031) Calling .PreCreateCheck
	I0729 10:22:26.935556   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:26.935980   12698 main.go:141] libmachine: Creating machine...
	I0729 10:22:26.935997   12698 main.go:141] libmachine: (addons-342031) Calling .Create
	I0729 10:22:26.936139   12698 main.go:141] libmachine: (addons-342031) Creating KVM machine...
	I0729 10:22:26.937327   12698 main.go:141] libmachine: (addons-342031) DBG | found existing default KVM network
	I0729 10:22:26.938077   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:26.937934   12720 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0729 10:22:26.938108   12698 main.go:141] libmachine: (addons-342031) DBG | created network xml: 
	I0729 10:22:26.938122   12698 main.go:141] libmachine: (addons-342031) DBG | <network>
	I0729 10:22:26.938131   12698 main.go:141] libmachine: (addons-342031) DBG |   <name>mk-addons-342031</name>
	I0729 10:22:26.938136   12698 main.go:141] libmachine: (addons-342031) DBG |   <dns enable='no'/>
	I0729 10:22:26.938186   12698 main.go:141] libmachine: (addons-342031) DBG |   
	I0729 10:22:26.938220   12698 main.go:141] libmachine: (addons-342031) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 10:22:26.938264   12698 main.go:141] libmachine: (addons-342031) DBG |     <dhcp>
	I0729 10:22:26.938288   12698 main.go:141] libmachine: (addons-342031) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 10:22:26.938295   12698 main.go:141] libmachine: (addons-342031) DBG |     </dhcp>
	I0729 10:22:26.938301   12698 main.go:141] libmachine: (addons-342031) DBG |   </ip>
	I0729 10:22:26.938309   12698 main.go:141] libmachine: (addons-342031) DBG |   
	I0729 10:22:26.938319   12698 main.go:141] libmachine: (addons-342031) DBG | </network>
	I0729 10:22:26.938332   12698 main.go:141] libmachine: (addons-342031) DBG | 
	I0729 10:22:26.943466   12698 main.go:141] libmachine: (addons-342031) DBG | trying to create private KVM network mk-addons-342031 192.168.39.0/24...
	I0729 10:22:27.007201   12698 main.go:141] libmachine: (addons-342031) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 ...
	I0729 10:22:27.007230   12698 main.go:141] libmachine: (addons-342031) DBG | private KVM network mk-addons-342031 192.168.39.0/24 created
	I0729 10:22:27.007250   12698 main.go:141] libmachine: (addons-342031) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:22:27.007265   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.007149   12720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:27.007312   12698 main.go:141] libmachine: (addons-342031) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:22:27.271201   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.271085   12720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa...
	I0729 10:22:27.433086   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.432945   12720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/addons-342031.rawdisk...
	I0729 10:22:27.433114   12698 main.go:141] libmachine: (addons-342031) DBG | Writing magic tar header
	I0729 10:22:27.433128   12698 main.go:141] libmachine: (addons-342031) DBG | Writing SSH key tar header
	I0729 10:22:27.433140   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.433073   12720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 ...
	I0729 10:22:27.433222   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031
	I0729 10:22:27.433245   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:22:27.433254   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 (perms=drwx------)
	I0729 10:22:27.433261   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:27.433269   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:22:27.433284   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:22:27.433294   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:22:27.433305   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:22:27.433314   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:22:27.433321   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:22:27.433329   12698 main.go:141] libmachine: (addons-342031) Creating domain...
	I0729 10:22:27.433340   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:22:27.433347   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:22:27.433353   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home
	I0729 10:22:27.433360   12698 main.go:141] libmachine: (addons-342031) DBG | Skipping /home - not owner
	I0729 10:22:27.434323   12698 main.go:141] libmachine: (addons-342031) define libvirt domain using xml: 
	I0729 10:22:27.434352   12698 main.go:141] libmachine: (addons-342031) <domain type='kvm'>
	I0729 10:22:27.434362   12698 main.go:141] libmachine: (addons-342031)   <name>addons-342031</name>
	I0729 10:22:27.434377   12698 main.go:141] libmachine: (addons-342031)   <memory unit='MiB'>4000</memory>
	I0729 10:22:27.434385   12698 main.go:141] libmachine: (addons-342031)   <vcpu>2</vcpu>
	I0729 10:22:27.434390   12698 main.go:141] libmachine: (addons-342031)   <features>
	I0729 10:22:27.434397   12698 main.go:141] libmachine: (addons-342031)     <acpi/>
	I0729 10:22:27.434401   12698 main.go:141] libmachine: (addons-342031)     <apic/>
	I0729 10:22:27.434405   12698 main.go:141] libmachine: (addons-342031)     <pae/>
	I0729 10:22:27.434410   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434417   12698 main.go:141] libmachine: (addons-342031)   </features>
	I0729 10:22:27.434421   12698 main.go:141] libmachine: (addons-342031)   <cpu mode='host-passthrough'>
	I0729 10:22:27.434427   12698 main.go:141] libmachine: (addons-342031)   
	I0729 10:22:27.434438   12698 main.go:141] libmachine: (addons-342031)   </cpu>
	I0729 10:22:27.434444   12698 main.go:141] libmachine: (addons-342031)   <os>
	I0729 10:22:27.434448   12698 main.go:141] libmachine: (addons-342031)     <type>hvm</type>
	I0729 10:22:27.434479   12698 main.go:141] libmachine: (addons-342031)     <boot dev='cdrom'/>
	I0729 10:22:27.434503   12698 main.go:141] libmachine: (addons-342031)     <boot dev='hd'/>
	I0729 10:22:27.434515   12698 main.go:141] libmachine: (addons-342031)     <bootmenu enable='no'/>
	I0729 10:22:27.434530   12698 main.go:141] libmachine: (addons-342031)   </os>
	I0729 10:22:27.434542   12698 main.go:141] libmachine: (addons-342031)   <devices>
	I0729 10:22:27.434555   12698 main.go:141] libmachine: (addons-342031)     <disk type='file' device='cdrom'>
	I0729 10:22:27.434586   12698 main.go:141] libmachine: (addons-342031)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/boot2docker.iso'/>
	I0729 10:22:27.434599   12698 main.go:141] libmachine: (addons-342031)       <target dev='hdc' bus='scsi'/>
	I0729 10:22:27.434615   12698 main.go:141] libmachine: (addons-342031)       <readonly/>
	I0729 10:22:27.434632   12698 main.go:141] libmachine: (addons-342031)     </disk>
	I0729 10:22:27.434650   12698 main.go:141] libmachine: (addons-342031)     <disk type='file' device='disk'>
	I0729 10:22:27.434665   12698 main.go:141] libmachine: (addons-342031)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:22:27.434676   12698 main.go:141] libmachine: (addons-342031)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/addons-342031.rawdisk'/>
	I0729 10:22:27.434682   12698 main.go:141] libmachine: (addons-342031)       <target dev='hda' bus='virtio'/>
	I0729 10:22:27.434687   12698 main.go:141] libmachine: (addons-342031)     </disk>
	I0729 10:22:27.434692   12698 main.go:141] libmachine: (addons-342031)     <interface type='network'>
	I0729 10:22:27.434721   12698 main.go:141] libmachine: (addons-342031)       <source network='mk-addons-342031'/>
	I0729 10:22:27.434737   12698 main.go:141] libmachine: (addons-342031)       <model type='virtio'/>
	I0729 10:22:27.434748   12698 main.go:141] libmachine: (addons-342031)     </interface>
	I0729 10:22:27.434758   12698 main.go:141] libmachine: (addons-342031)     <interface type='network'>
	I0729 10:22:27.434770   12698 main.go:141] libmachine: (addons-342031)       <source network='default'/>
	I0729 10:22:27.434775   12698 main.go:141] libmachine: (addons-342031)       <model type='virtio'/>
	I0729 10:22:27.434782   12698 main.go:141] libmachine: (addons-342031)     </interface>
	I0729 10:22:27.434791   12698 main.go:141] libmachine: (addons-342031)     <serial type='pty'>
	I0729 10:22:27.434803   12698 main.go:141] libmachine: (addons-342031)       <target port='0'/>
	I0729 10:22:27.434817   12698 main.go:141] libmachine: (addons-342031)     </serial>
	I0729 10:22:27.434828   12698 main.go:141] libmachine: (addons-342031)     <console type='pty'>
	I0729 10:22:27.434842   12698 main.go:141] libmachine: (addons-342031)       <target type='serial' port='0'/>
	I0729 10:22:27.434858   12698 main.go:141] libmachine: (addons-342031)     </console>
	I0729 10:22:27.434866   12698 main.go:141] libmachine: (addons-342031)     <rng model='virtio'>
	I0729 10:22:27.434874   12698 main.go:141] libmachine: (addons-342031)       <backend model='random'>/dev/random</backend>
	I0729 10:22:27.434883   12698 main.go:141] libmachine: (addons-342031)     </rng>
	I0729 10:22:27.434900   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434916   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434928   12698 main.go:141] libmachine: (addons-342031)   </devices>
	I0729 10:22:27.434939   12698 main.go:141] libmachine: (addons-342031) </domain>
	I0729 10:22:27.434953   12698 main.go:141] libmachine: (addons-342031) 
	I0729 10:22:27.440547   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:ff:f9:9d in network default
	I0729 10:22:27.441089   12698 main.go:141] libmachine: (addons-342031) Ensuring networks are active...
	I0729 10:22:27.441108   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:27.441734   12698 main.go:141] libmachine: (addons-342031) Ensuring network default is active
	I0729 10:22:27.442102   12698 main.go:141] libmachine: (addons-342031) Ensuring network mk-addons-342031 is active
	I0729 10:22:27.442532   12698 main.go:141] libmachine: (addons-342031) Getting domain xml...
	I0729 10:22:27.443154   12698 main.go:141] libmachine: (addons-342031) Creating domain...
	I0729 10:22:28.821137   12698 main.go:141] libmachine: (addons-342031) Waiting to get IP...
	I0729 10:22:28.822041   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:28.822408   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:28.822503   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:28.822437   12720 retry.go:31] will retry after 195.541091ms: waiting for machine to come up
	I0729 10:22:29.019770   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.020248   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.020277   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.020177   12720 retry.go:31] will retry after 309.221715ms: waiting for machine to come up
	I0729 10:22:29.330544   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.330982   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.331003   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.330938   12720 retry.go:31] will retry after 355.964011ms: waiting for machine to come up
	I0729 10:22:29.688385   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.688926   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.688954   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.688887   12720 retry.go:31] will retry after 484.927173ms: waiting for machine to come up
	I0729 10:22:30.175884   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:30.176403   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:30.176442   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:30.176354   12720 retry.go:31] will retry after 689.808028ms: waiting for machine to come up
	I0729 10:22:30.868197   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:30.868660   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:30.868685   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:30.868578   12720 retry.go:31] will retry after 916.035718ms: waiting for machine to come up
	I0729 10:22:31.786379   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:31.786834   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:31.786865   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:31.786752   12720 retry.go:31] will retry after 751.473166ms: waiting for machine to come up
	I0729 10:22:32.539734   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:32.540095   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:32.540116   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:32.540058   12720 retry.go:31] will retry after 988.862367ms: waiting for machine to come up
	I0729 10:22:33.530089   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:33.530398   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:33.530426   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:33.530345   12720 retry.go:31] will retry after 1.4355459s: waiting for machine to come up
	I0729 10:22:34.967825   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:34.968197   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:34.968221   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:34.968154   12720 retry.go:31] will retry after 1.673804403s: waiting for machine to come up
	I0729 10:22:36.643776   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:36.644310   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:36.644334   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:36.644248   12720 retry.go:31] will retry after 2.552383352s: waiting for machine to come up
	I0729 10:22:39.199894   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:39.200354   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:39.200383   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:39.200305   12720 retry.go:31] will retry after 2.297424729s: waiting for machine to come up
	I0729 10:22:41.500667   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:41.501034   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:41.501053   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:41.500991   12720 retry.go:31] will retry after 3.517350765s: waiting for machine to come up
	I0729 10:22:45.022370   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:45.022689   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:45.022733   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:45.022649   12720 retry.go:31] will retry after 4.782196854s: waiting for machine to come up
	I0729 10:22:49.807334   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.807781   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has current primary IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.807799   12698 main.go:141] libmachine: (addons-342031) Found IP for machine: 192.168.39.224
	I0729 10:22:49.807812   12698 main.go:141] libmachine: (addons-342031) Reserving static IP address...
	I0729 10:22:49.808145   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find host DHCP lease matching {name: "addons-342031", mac: "52:54:00:26:46:e4", ip: "192.168.39.224"} in network mk-addons-342031
	I0729 10:22:49.878329   12698 main.go:141] libmachine: (addons-342031) DBG | Getting to WaitForSSH function...
	I0729 10:22:49.878359   12698 main.go:141] libmachine: (addons-342031) Reserved static IP address: 192.168.39.224
	I0729 10:22:49.878373   12698 main.go:141] libmachine: (addons-342031) Waiting for SSH to be available...
	I0729 10:22:49.880810   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.881081   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031
	I0729 10:22:49.881118   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find defined IP address of network mk-addons-342031 interface with MAC address 52:54:00:26:46:e4
	I0729 10:22:49.881269   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH client type: external
	I0729 10:22:49.881311   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa (-rw-------)
	I0729 10:22:49.881350   12698 main.go:141] libmachine: (addons-342031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:22:49.881364   12698 main.go:141] libmachine: (addons-342031) DBG | About to run SSH command:
	I0729 10:22:49.881380   12698 main.go:141] libmachine: (addons-342031) DBG | exit 0
	I0729 10:22:49.892439   12698 main.go:141] libmachine: (addons-342031) DBG | SSH cmd err, output: exit status 255: 
	I0729 10:22:49.892464   12698 main.go:141] libmachine: (addons-342031) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 10:22:49.892476   12698 main.go:141] libmachine: (addons-342031) DBG | command : exit 0
	I0729 10:22:49.892484   12698 main.go:141] libmachine: (addons-342031) DBG | err     : exit status 255
	I0729 10:22:49.892495   12698 main.go:141] libmachine: (addons-342031) DBG | output  : 
	I0729 10:22:52.892685   12698 main.go:141] libmachine: (addons-342031) DBG | Getting to WaitForSSH function...
	I0729 10:22:52.895786   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:52.896257   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:52.896286   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:52.896350   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH client type: external
	I0729 10:22:52.896369   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa (-rw-------)
	I0729 10:22:52.896470   12698 main.go:141] libmachine: (addons-342031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:22:52.896491   12698 main.go:141] libmachine: (addons-342031) DBG | About to run SSH command:
	I0729 10:22:52.896501   12698 main.go:141] libmachine: (addons-342031) DBG | exit 0
	I0729 10:22:53.018886   12698 main.go:141] libmachine: (addons-342031) DBG | SSH cmd err, output: <nil>: 
	I0729 10:22:53.019180   12698 main.go:141] libmachine: (addons-342031) KVM machine creation complete!
	I0729 10:22:53.019451   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:53.019957   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:53.020152   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:53.020314   12698 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:22:53.020328   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:22:53.021481   12698 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:22:53.021498   12698 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:22:53.021506   12698 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:22:53.021512   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.023922   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.024313   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.024340   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.024467   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.024675   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.024862   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.024989   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.025141   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.025321   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.025342   12698 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:22:53.122113   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:22:53.122139   12698 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:22:53.122147   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.125038   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.125408   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.125443   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.125641   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.125831   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.125981   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.126106   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.126241   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.126388   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.126397   12698 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:22:53.223389   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:22:53.223476   12698 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:22:53.223490   12698 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:22:53.223503   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.223736   12698 buildroot.go:166] provisioning hostname "addons-342031"
	I0729 10:22:53.223759   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.223929   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.226140   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.226436   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.226462   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.226601   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.226780   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.226910   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.227023   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.227197   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.227361   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.227374   12698 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-342031 && echo "addons-342031" | sudo tee /etc/hostname
	I0729 10:22:53.341863   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-342031
	
	I0729 10:22:53.341887   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.344719   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.345165   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.345189   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.345404   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.345584   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.345742   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.345886   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.346079   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.346233   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.346249   12698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-342031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-342031/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-342031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:22:53.452067   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:22:53.452105   12698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:22:53.452157   12698 buildroot.go:174] setting up certificates
	I0729 10:22:53.452171   12698 provision.go:84] configureAuth start
	I0729 10:22:53.452190   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.452461   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:53.454977   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.455267   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.455293   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.455421   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.457342   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.457631   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.457656   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.457784   12698 provision.go:143] copyHostCerts
	I0729 10:22:53.457848   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:22:53.457998   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:22:53.458083   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:22:53.458145   12698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.addons-342031 san=[127.0.0.1 192.168.39.224 addons-342031 localhost minikube]
	I0729 10:22:53.970609   12698 provision.go:177] copyRemoteCerts
	I0729 10:22:53.970664   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:22:53.970686   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.973214   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.973517   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.973546   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.973663   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.973843   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.974015   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.974148   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.052636   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:22:54.076701   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:22:54.099658   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:22:54.123583   12698 provision.go:87] duration metric: took 671.393634ms to configureAuth
	I0729 10:22:54.123608   12698 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:22:54.123767   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:22:54.123841   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.126445   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.126735   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.126768   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.126938   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.127156   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.127357   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.127488   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.127623   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:54.127789   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:54.127804   12698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:22:54.385444   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:22:54.385474   12698 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:22:54.385506   12698 main.go:141] libmachine: (addons-342031) Calling .GetURL
	I0729 10:22:54.386942   12698 main.go:141] libmachine: (addons-342031) DBG | Using libvirt version 6000000
	I0729 10:22:54.389227   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.389581   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.389612   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.389711   12698 main.go:141] libmachine: Docker is up and running!
	I0729 10:22:54.389725   12698 main.go:141] libmachine: Reticulating splines...
	I0729 10:22:54.389733   12698 client.go:171] duration metric: took 27.797830436s to LocalClient.Create
	I0729 10:22:54.389758   12698 start.go:167] duration metric: took 27.797890326s to libmachine.API.Create "addons-342031"
	I0729 10:22:54.389771   12698 start.go:293] postStartSetup for "addons-342031" (driver="kvm2")
	I0729 10:22:54.389784   12698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:22:54.389799   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.390023   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:22:54.390044   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.392254   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.392587   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.392607   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.392800   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.392956   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.393122   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.393232   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.473735   12698 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:22:54.477956   12698 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:22:54.477982   12698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:22:54.478071   12698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:22:54.478102   12698 start.go:296] duration metric: took 88.323931ms for postStartSetup
	I0729 10:22:54.478160   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:54.478694   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:54.481118   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.481465   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.481488   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.481749   12698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json ...
	I0729 10:22:54.481929   12698 start.go:128] duration metric: took 27.907873744s to createHost
	I0729 10:22:54.481958   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.484131   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.484454   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.484474   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.484596   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.484860   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.485017   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.485155   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.485332   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:54.485490   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:54.485502   12698 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:22:54.583486   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722248574.564028643
	
	I0729 10:22:54.583508   12698 fix.go:216] guest clock: 1722248574.564028643
	I0729 10:22:54.583514   12698 fix.go:229] Guest: 2024-07-29 10:22:54.564028643 +0000 UTC Remote: 2024-07-29 10:22:54.481940225 +0000 UTC m=+28.006298665 (delta=82.088418ms)
	I0729 10:22:54.583556   12698 fix.go:200] guest clock delta is within tolerance: 82.088418ms
	I0729 10:22:54.583567   12698 start.go:83] releasing machines lock for "addons-342031", held for 28.009600176s
	I0729 10:22:54.583591   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.583838   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:54.586516   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.586951   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.586976   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.587111   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587559   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587734   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587811   12698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:22:54.587853   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.587923   12698 ssh_runner.go:195] Run: cat /version.json
	I0729 10:22:54.587946   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.590516   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.590664   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.590985   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.591018   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.591037   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.591083   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.591150   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.591302   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.591363   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.591464   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.591577   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.591674   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.591740   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.591777   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.663824   12698 ssh_runner.go:195] Run: systemctl --version
	I0729 10:22:54.691554   12698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:22:54.849005   12698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:22:54.855374   12698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:22:54.855464   12698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:22:54.871260   12698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:22:54.871285   12698 start.go:495] detecting cgroup driver to use...
	I0729 10:22:54.871351   12698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:22:54.886915   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:22:54.900702   12698 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:22:54.900751   12698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:22:54.913925   12698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:22:54.926818   12698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:22:55.037036   12698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:22:55.181561   12698 docker.go:233] disabling docker service ...
	I0729 10:22:55.181621   12698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:22:55.202224   12698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:22:55.215904   12698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:22:55.363613   12698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:22:55.484120   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:22:55.499134   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:22:55.518491   12698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:22:55.518539   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.529356   12698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:22:55.529436   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.540203   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.550853   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.561568   12698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:22:55.572562   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.583485   12698 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.601219   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.612562   12698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:22:55.622499   12698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:22:55.622614   12698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:22:55.635857   12698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:22:55.646240   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:22:55.763578   12698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:22:55.908295   12698 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:22:55.908386   12698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:22:55.912856   12698 start.go:563] Will wait 60s for crictl version
	I0729 10:22:55.912912   12698 ssh_runner.go:195] Run: which crictl
	I0729 10:22:55.916591   12698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:22:55.956978   12698 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:22:55.957108   12698 ssh_runner.go:195] Run: crio --version
	I0729 10:22:55.985476   12698 ssh_runner.go:195] Run: crio --version
	I0729 10:22:56.017764   12698 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:22:56.019180   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:56.021767   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:56.022099   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:56.022117   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:56.022337   12698 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:22:56.026399   12698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:22:56.038809   12698 kubeadm.go:883] updating cluster {Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:22:56.038918   12698 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:22:56.038958   12698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:22:56.070365   12698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 10:22:56.070436   12698 ssh_runner.go:195] Run: which lz4
	I0729 10:22:56.074372   12698 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:22:56.078477   12698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:22:56.078508   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 10:22:57.482788   12698 crio.go:462] duration metric: took 1.408437287s to copy over tarball
	I0729 10:22:57.482866   12698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:22:59.818324   12698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335433207s)
	I0729 10:22:59.818354   12698 crio.go:469] duration metric: took 2.335534339s to extract the tarball
	I0729 10:22:59.818369   12698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:22:59.856818   12698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:22:59.899098   12698 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:22:59.899133   12698 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:22:59.899143   12698 kubeadm.go:934] updating node { 192.168.39.224 8443 v1.30.3 crio true true} ...
	I0729 10:22:59.899262   12698 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-342031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:22:59.899330   12698 ssh_runner.go:195] Run: crio config
	I0729 10:22:59.945082   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:22:59.945105   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:22:59.945118   12698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:22:59.945147   12698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-342031 NodeName:addons-342031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:22:59.945301   12698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-342031"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:22:59.945359   12698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:22:59.955352   12698 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:22:59.955430   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:22:59.964931   12698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 10:22:59.981626   12698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:22:59.998252   12698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 10:23:00.016127   12698 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0729 10:23:00.020257   12698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:23:00.033296   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:23:00.144045   12698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:23:00.161025   12698 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031 for IP: 192.168.39.224
	I0729 10:23:00.161050   12698 certs.go:194] generating shared ca certs ...
	I0729 10:23:00.161069   12698 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.161227   12698 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:23:00.551886   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt ...
	I0729 10:23:00.551921   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt: {Name:mka8cf7129dad81b43b458c80907bb582a244c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.552123   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key ...
	I0729 10:23:00.552140   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key: {Name:mk0d4a0975e994627d0a57853c3533e5941aaaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.552251   12698 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:23:00.675483   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt ...
	I0729 10:23:00.675516   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt: {Name:mk97ca6b11acfe37f69f07b0ad2f80f38e3821b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.675706   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key ...
	I0729 10:23:00.675721   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key: {Name:mkc24eedf8d015704d0c2fb9cb7ecfdd6327465e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.675829   12698 certs.go:256] generating profile certs ...
	I0729 10:23:00.675909   12698 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key
	I0729 10:23:00.675932   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt with IP's: []
	I0729 10:23:00.855889   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt ...
	I0729 10:23:00.855920   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: {Name:mk37221677c567e713e2630239b01169668a5d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.856117   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key ...
	I0729 10:23:00.856133   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key: {Name:mkb51aeddb9569ca59fd2c15435e5e96e355f414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.856243   12698 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3
	I0729 10:23:00.856265   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.224]
	I0729 10:23:00.989586   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 ...
	I0729 10:23:00.989617   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3: {Name:mke401fbeba18fa1c710817d8169aadd5ba6547c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.989775   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3 ...
	I0729 10:23:00.989792   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3: {Name:mk0e7aed75425bcb0779fff6de3d79143d9c1b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.989868   12698 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt
	I0729 10:23:00.989947   12698 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key
	I0729 10:23:00.989998   12698 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key
	I0729 10:23:00.990018   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt with IP's: []
	I0729 10:23:01.314898   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt ...
	I0729 10:23:01.314934   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt: {Name:mk37096704201e38d1ef496c8563f06c21b8bd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:01.315093   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key ...
	I0729 10:23:01.315103   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key: {Name:mk4912fd096558512f1a3b241f31bad5af303652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:01.315258   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:23:01.315290   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:23:01.315314   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:23:01.315337   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:23:01.315934   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:23:01.344813   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:23:01.370004   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:23:01.395105   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:23:01.421073   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 10:23:01.445856   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:23:01.470509   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:23:01.495201   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:23:01.525292   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:23:01.549446   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:23:01.567088   12698 ssh_runner.go:195] Run: openssl version
	I0729 10:23:01.572909   12698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:23:01.584523   12698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.589210   12698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.589277   12698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.595205   12698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:23:01.606920   12698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:23:01.610964   12698 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:23:01.611009   12698 kubeadm.go:392] StartCluster: {Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:01.611076   12698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:23:01.611114   12698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:23:01.656041   12698 cri.go:89] found id: ""
	I0729 10:23:01.656108   12698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:23:01.666845   12698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:23:01.677056   12698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:23:01.687121   12698 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:23:01.687144   12698 kubeadm.go:157] found existing configuration files:
	
	I0729 10:23:01.687197   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:23:01.696586   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:23:01.696642   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:23:01.706741   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:23:01.716220   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:23:01.716272   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:23:01.726527   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:23:01.736199   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:23:01.736253   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:23:01.749229   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:23:01.759335   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:23:01.759397   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:23:01.776945   12698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:23:01.967268   12698 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:23:12.140583   12698 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:23:12.140651   12698 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:23:12.140737   12698 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:23:12.140851   12698 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:23:12.140931   12698 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:23:12.140982   12698 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:23:12.143333   12698 out.go:204]   - Generating certificates and keys ...
	I0729 10:23:12.143406   12698 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:23:12.143453   12698 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:23:12.143530   12698 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:23:12.143592   12698 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:23:12.143669   12698 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:23:12.143722   12698 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:23:12.143769   12698 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:23:12.143889   12698 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-342031 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0729 10:23:12.143953   12698 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:23:12.144052   12698 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-342031 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0729 10:23:12.144111   12698 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:23:12.144168   12698 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:23:12.144243   12698 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:23:12.144295   12698 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:23:12.144338   12698 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:23:12.144391   12698 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:23:12.144463   12698 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:23:12.144540   12698 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:23:12.144625   12698 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:23:12.144753   12698 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:23:12.144839   12698 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:23:12.146425   12698 out.go:204]   - Booting up control plane ...
	I0729 10:23:12.146500   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:23:12.146575   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:23:12.146663   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:23:12.146807   12698 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:23:12.146893   12698 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:23:12.146953   12698 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:23:12.147111   12698 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:23:12.147186   12698 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:23:12.147268   12698 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.684726ms
	I0729 10:23:12.147373   12698 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:23:12.147435   12698 kubeadm.go:310] [api-check] The API server is healthy after 5.502151656s
	I0729 10:23:12.147574   12698 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:23:12.147690   12698 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:23:12.147735   12698 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:23:12.147879   12698 kubeadm.go:310] [mark-control-plane] Marking the node addons-342031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:23:12.147926   12698 kubeadm.go:310] [bootstrap-token] Using token: smwj70.27f0grtxfr80dwmz
	I0729 10:23:12.149231   12698 out.go:204]   - Configuring RBAC rules ...
	I0729 10:23:12.149321   12698 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:23:12.149402   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:23:12.149524   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:23:12.149631   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:23:12.149726   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:23:12.149800   12698 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:23:12.149896   12698 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:23:12.149936   12698 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:23:12.149975   12698 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:23:12.149981   12698 kubeadm.go:310] 
	I0729 10:23:12.150029   12698 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:23:12.150035   12698 kubeadm.go:310] 
	I0729 10:23:12.150100   12698 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:23:12.150105   12698 kubeadm.go:310] 
	I0729 10:23:12.150134   12698 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:23:12.150213   12698 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:23:12.150286   12698 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:23:12.150296   12698 kubeadm.go:310] 
	I0729 10:23:12.150377   12698 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:23:12.150388   12698 kubeadm.go:310] 
	I0729 10:23:12.150456   12698 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:23:12.150465   12698 kubeadm.go:310] 
	I0729 10:23:12.150535   12698 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:23:12.150645   12698 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:23:12.150770   12698 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:23:12.150780   12698 kubeadm.go:310] 
	I0729 10:23:12.150907   12698 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:23:12.151005   12698 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:23:12.151012   12698 kubeadm.go:310] 
	I0729 10:23:12.151249   12698 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token smwj70.27f0grtxfr80dwmz \
	I0729 10:23:12.151369   12698 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 10:23:12.151397   12698 kubeadm.go:310] 	--control-plane 
	I0729 10:23:12.151404   12698 kubeadm.go:310] 
	I0729 10:23:12.151470   12698 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:23:12.151478   12698 kubeadm.go:310] 
	I0729 10:23:12.151556   12698 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token smwj70.27f0grtxfr80dwmz \
	I0729 10:23:12.151723   12698 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
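
	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA public key. For reference, it can typically be recomputed on the control plane with the standard kubeadm recipe (the ca.crt path below is assumed from the certificateDir logged earlier, /var/lib/minikube/certs):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
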
	I0729 10:23:12.151741   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:23:12.151751   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:23:12.154038   12698 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:23:12.155580   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:23:12.167254   12698 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
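
	The 496-byte payload copied here is the bridge CNI conflist that the "Configuring bridge CNI" step refers to; its contents are not shown in the log. One way to inspect it on the node, assuming the profile name from this run, is:

	    minikube -p addons-342031 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
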
	I0729 10:23:12.186947   12698 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:23:12.186991   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:12.187040   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-342031 minikube.k8s.io/updated_at=2024_07_29T10_23_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=addons-342031 minikube.k8s.io/primary=true
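
	The label command above stamps the node with minikube metadata (updated_at, version, commit, name, primary). A quick way to confirm the labels landed, assuming the kubeconfig context matches the profile name:

	    kubectl --context addons-342031 get node addons-342031 --show-labels
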
	I0729 10:23:12.314257   12698 ops.go:34] apiserver oom_adj: -16
	I0729 10:23:12.327612   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:12.828102   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:13.327797   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:13.828497   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:14.327873   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:14.827777   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:15.327749   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:15.828518   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:16.327730   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:16.827722   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:17.328319   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:17.827752   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:18.327751   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:18.827714   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:19.328338   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:19.828572   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:20.328400   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:20.828491   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:21.327620   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:21.828511   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:22.328498   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:22.827985   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:23.327693   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:23.827857   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.327976   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.828269   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.947888   12698 kubeadm.go:1113] duration metric: took 12.760943739s to wait for elevateKubeSystemPrivileges
	I0729 10:23:24.947913   12698 kubeadm.go:394] duration metric: took 23.336907921s to StartCluster
	I0729 10:23:24.947928   12698 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:24.948028   12698 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:23:24.948409   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:24.948578   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:23:24.948600   12698 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:23:24.948667   12698 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
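
	The toEnable map above is the addon set requested by the test harness; the same addons can be toggled per profile from the minikube CLI. A sketch, assuming the profile name from this run:

	    minikube -p addons-342031 addons list                     # current enabled/disabled state
	    minikube -p addons-342031 addons enable metrics-server    # turn a single addon on
	    minikube -p addons-342031 addons disable yakd             # or off
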
	I0729 10:23:24.948752   12698 addons.go:69] Setting yakd=true in profile "addons-342031"
	I0729 10:23:24.948765   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:23:24.948778   12698 addons.go:69] Setting inspektor-gadget=true in profile "addons-342031"
	I0729 10:23:24.948794   12698 addons.go:234] Setting addon yakd=true in "addons-342031"
	I0729 10:23:24.948808   12698 addons.go:69] Setting volcano=true in profile "addons-342031"
	I0729 10:23:24.948816   12698 addons.go:234] Setting addon inspektor-gadget=true in "addons-342031"
	I0729 10:23:24.948831   12698 addons.go:234] Setting addon volcano=true in "addons-342031"
	I0729 10:23:24.948853   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948855   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948858   12698 addons.go:69] Setting cloud-spanner=true in profile "addons-342031"
	I0729 10:23:24.948862   12698 addons.go:69] Setting metrics-server=true in profile "addons-342031"
	I0729 10:23:24.948875   12698 addons.go:234] Setting addon cloud-spanner=true in "addons-342031"
	I0729 10:23:24.948883   12698 addons.go:234] Setting addon metrics-server=true in "addons-342031"
	I0729 10:23:24.948896   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948816   12698 addons.go:69] Setting storage-provisioner=true in profile "addons-342031"
	I0729 10:23:24.948918   12698 addons.go:69] Setting gcp-auth=true in profile "addons-342031"
	I0729 10:23:24.948933   12698 mustload.go:65] Loading cluster: addons-342031
	I0729 10:23:24.948941   12698 addons.go:234] Setting addon storage-provisioner=true in "addons-342031"
	I0729 10:23:24.948974   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949104   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:23:24.949280   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949283   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949301   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949304   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949316   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949394   12698 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-342031"
	I0729 10:23:24.949408   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949419   12698 addons.go:69] Setting default-storageclass=true in profile "addons-342031"
	I0729 10:23:24.949438   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949447   12698 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-342031"
	I0729 10:23:24.949456   12698 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-342031"
	I0729 10:23:24.948853   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949485   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949542   12698 addons.go:69] Setting ingress=true in profile "addons-342031"
	I0729 10:23:24.949566   12698 addons.go:234] Setting addon ingress=true in "addons-342031"
	I0729 10:23:24.948839   12698 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-342031"
	I0729 10:23:24.949629   12698 addons.go:69] Setting helm-tiller=true in profile "addons-342031"
	I0729 10:23:24.949647   12698 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-342031"
	I0729 10:23:24.949655   12698 addons.go:234] Setting addon helm-tiller=true in "addons-342031"
	I0729 10:23:24.949675   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949624   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949762   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949779   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949794   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949301   12698 addons.go:69] Setting volumesnapshots=true in profile "addons-342031"
	I0729 10:23:24.949833   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949844   12698 addons.go:234] Setting addon volumesnapshots=true in "addons-342031"
	I0729 10:23:24.949866   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950041   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950069   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950179   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950206   12698 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-342031"
	I0729 10:23:24.949675   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950228   12698 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-342031"
	I0729 10:23:24.948909   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950241   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950262   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950558   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949412   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950573   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950585   12698 addons.go:69] Setting registry=true in profile "addons-342031"
	I0729 10:23:24.950591   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950605   12698 addons.go:234] Setting addon registry=true in "addons-342031"
	I0729 10:23:24.950610   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950632   12698 addons.go:69] Setting ingress-dns=true in profile "addons-342031"
	I0729 10:23:24.950651   12698 addons.go:234] Setting addon ingress-dns=true in "addons-342031"
	I0729 10:23:24.950658   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950670   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950558   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950769   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950810   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950962   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.951507   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.951574   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.961635   12698 out.go:177] * Verifying Kubernetes components...
	I0729 10:23:24.950230   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950593   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.962397   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.962784   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.962811   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.963522   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:23:24.970252   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0729 10:23:24.972091   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0729 10:23:24.972406   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.972714   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.973141   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.973158   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.973427   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.973444   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.973598   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.974268   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.974600   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.974626   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.974852   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.974900   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.979595   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0729 10:23:24.980093   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.980637   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.980660   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.981028   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.981188   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:24.982833   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0729 10:23:24.983150   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.983667   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.983682   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.984002   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.985017   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.985042   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.991170   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0729 10:23:24.992006   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 10:23:24.992240   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.992505   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.992983   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.993005   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.993346   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.993743   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.993766   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.994311   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.994367   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.994850   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.995381   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.995413   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.002250   12698 addons.go:234] Setting addon default-storageclass=true in "addons-342031"
	I0729 10:23:25.002293   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.002642   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.002668   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.005565   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0729 10:23:25.007184   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.007941   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.007967   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.008445   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.008696   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.011193   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.013230   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0729 10:23:25.013767   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.013838   12698 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:23:25.014097   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0729 10:23:25.014465   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.014717   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.014732   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.015310   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.015327   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.015590   12698 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:23:25.015605   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:23:25.015622   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.015971   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.016149   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.016399   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0729 10:23:25.016726   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.017596   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0729 10:23:25.017815   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.017835   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.017900   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0729 10:23:25.018243   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.018326   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.018525   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.018580   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.019054   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.019073   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.019496   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.019640   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.019654   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.020600   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.020625   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.020836   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.020899   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.021247   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.021276   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.021473   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.021493   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.021510   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.021856   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0729 10:23:25.021991   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.022042   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.022256   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.022309   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.022503   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.022556   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.022584   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.022994   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.023304   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.023351   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.023656   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.024024   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.024040   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.024365   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.024834   12698 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 10:23:25.024905   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.024932   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.025153   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0729 10:23:25.025300   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
	I0729 10:23:25.025729   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.026043   12698 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 10:23:25.026060   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 10:23:25.026080   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.026878   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.026900   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.027360   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.027850   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.027890   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.030825   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.031017   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0729 10:23:25.031245   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.031348   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.031371   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.031528   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.031744   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.031938   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.032106   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.032475   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.032491   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.032761   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.033346   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.033365   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.033749   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.033822   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.034395   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.034431   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.036243   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0729 10:23:25.036592   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.037128   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.037149   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.038036   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.038304   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.039760   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.039800   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.042634   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0729 10:23:25.043273   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.043797   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.043826   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.044138   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.044341   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.046646   12698 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-342031"
	I0729 10:23:25.046688   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.047047   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.047081   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.047295   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0729 10:23:25.047833   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.048354   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.048369   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.048727   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.049263   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.049298   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.050334   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.050635   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:25.050650   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:25.050999   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:25.051015   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:25.051026   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:25.051038   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:25.051045   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:25.051276   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:25.051287   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:23:25.051368   12698 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 10:23:25.053978   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0729 10:23:25.054406   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.054894   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.054918   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.055950   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.056454   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.056496   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.056680   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0729 10:23:25.057138   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.057697   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.057721   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.058112   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.064090   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0729 10:23:25.064520   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.064592   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.065739   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.065758   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.065987   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0729 10:23:25.066144   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0729 10:23:25.066172   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.066455   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.066620   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.067165   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.067183   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.067593   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.067842   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.068064   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0729 10:23:25.068218   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.068955   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.069706   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.069724   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.070025   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.070251   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 10:23:25.070279   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.070349   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.070862   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.070951   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.072127   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.072144   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.072392   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 10:23:25.072449   12698 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 10:23:25.072456   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.072475   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 10:23:25.072593   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.073325   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0729 10:23:25.073738   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.074241   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.074253   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.074639   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.074805   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.074856   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.075784   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 10:23:25.075816   12698 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 10:23:25.075845   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 10:23:25.076328   12698 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 10:23:25.076349   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.077077   12698 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 10:23:25.077157   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.078115   12698 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 10:23:25.078266   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 10:23:25.078284   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.079124   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 10:23:25.079140   12698 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 10:23:25.079167   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.079673   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 10:23:25.079865   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 10:23:25.081107   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 10:23:25.081192   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:25.082402   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0729 10:23:25.082903   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.082950   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.083446   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.083468   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.084127   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:25.084185   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 10:23:25.084287   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.084425   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.084437   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.084514   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.084573   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0729 10:23:25.084963   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.085045   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.085343   12698 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:23:25.085360   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 10:23:25.085376   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.085477   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.085644   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.085678   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.085984   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.086397   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.086426   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.086821   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.087096   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.087567   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.087917   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 10:23:25.088206   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.088466   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33211
	I0729 10:23:25.088848   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.088989   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089429   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.089450   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089586   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.089716   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089776   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.089840   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.089852   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.090171   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.090201   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.090231   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.090279   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.090744   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.090762   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.091105   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.091259   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.091444   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.091752   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.092153   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 10:23:25.092362   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0729 10:23:25.092465   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0729 10:23:25.092576   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.092866   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.093215   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.093235   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.093471   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.093488   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.093505   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 10:23:25.093519   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 10:23:25.093534   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.093550   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.093579   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.093646   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.093804   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.093987   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.094024   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.094360   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.094382   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.094743   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.095352   12698 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 10:23:25.095587   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.095900   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.096668   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 10:23:25.096683   12698 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 10:23:25.096700   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.097276   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0729 10:23:25.097398   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.097574   12698 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 10:23:25.097702   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.097715   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.098245   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.098269   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.098624   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.098797   12698 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 10:23:25.098814   12698 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 10:23:25.098829   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.098839   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.099448   12698 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 10:23:25.099455   12698 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 10:23:25.100490   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.100959   12698 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:23:25.100973   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 10:23:25.100998   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.101096   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 10:23:25.101109   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 10:23:25.101126   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.102119   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.102144   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.102174   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.102416   12698 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:23:25.102428   12698 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:23:25.102451   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.102635   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.102808   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.102967   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.103118   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.103128   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.103679   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.103714   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.104028   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0729 10:23:25.104191   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.104332   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.104403   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.104514   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.104819   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.105160   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.105177   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.105657   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.106201   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.106365   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.107474   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.107517   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.107546   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.107703   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.107856   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.107969   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.107999   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0729 10:23:25.108023   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.108189   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.108404   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.108421   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.108456   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.108809   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.108931   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.108945   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.108984   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.109073   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.109100   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.109212   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.109448   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.109639   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.109667   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.109709   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.109723   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.109956   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.110115   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.110170   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.110194   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.110520   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.110745   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.110962   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.111131   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.111230   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.111306   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.111427   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.111867   12698 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 10:23:25.112661   12698 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0729 10:23:25.113248   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43098->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.113268   12698 retry.go:31] will retry after 258.973636ms: ssh: handshake failed: read tcp 192.168.39.1:43098->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.113457   12698 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:23:25.113471   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 10:23:25.113483   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.115120   12698 out.go:177]   - Using image docker.io/busybox:stable
	I0729 10:23:25.116226   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.116244   12698 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:23:25.116257   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 10:23:25.116276   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.116623   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.116647   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.116795   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.116948   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.117094   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.117209   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	W0729 10:23:25.117796   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43114->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.117822   12698 retry.go:31] will retry after 323.118222ms: ssh: handshake failed: read tcp 192.168.39.1:43114->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.119350   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.119688   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.119706   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.119855   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.120006   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.120182   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.120345   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	W0729 10:23:25.125242   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43116->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.125270   12698 retry.go:31] will retry after 291.440373ms: ssh: handshake failed: read tcp 192.168.39.1:43116->192.168.39.224:22: read: connection reset by peer
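The three handshake failures above ("connection reset by peer") occur while several addon installers open SSH sessions to the VM at the same time; each failed dial is simply retried after a short delay, as the retry.go lines show. A minimal standalone sketch of that dial-with-retry pattern is below (hypothetical helper for illustration only, not the sshutil/retry code referenced in the log):

    package sshretry

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry retries a TCP dial with a simple linear backoff, mirroring the
    // "will retry after ..." behaviour seen in the log above.
    func dialWithRetry(addr string, attempts int, base time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 1; i <= attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(time.Duration(i) * base) // back off a little longer after each failure
        }
        return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
    }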
	I0729 10:23:25.410301   12698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:23:25.410376   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:23:25.416576   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:23:25.464840   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 10:23:25.467148   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 10:23:25.467167   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 10:23:25.480934   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:23:25.496094   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:23:25.516660   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 10:23:25.516687   12698 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 10:23:25.550490   12698 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 10:23:25.550518   12698 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 10:23:25.552060   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 10:23:25.552080   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 10:23:25.552660   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 10:23:25.552678   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 10:23:25.565482   12698 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 10:23:25.565506   12698 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 10:23:25.576825   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 10:23:25.576851   12698 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 10:23:25.588908   12698 node_ready.go:35] waiting up to 6m0s for node "addons-342031" to be "Ready" ...
	I0729 10:23:25.592378   12698 node_ready.go:49] node "addons-342031" has status "Ready":"True"
	I0729 10:23:25.592421   12698 node_ready.go:38] duration metric: took 3.465275ms for node "addons-342031" to be "Ready" ...
	I0729 10:23:25.592433   12698 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:23:25.602574   12698 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:25.664460   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 10:23:25.664481   12698 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 10:23:25.666566   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 10:23:25.666586   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 10:23:25.723005   12698 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 10:23:25.723025   12698 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 10:23:25.729702   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 10:23:25.729719   12698 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 10:23:25.730275   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:23:25.730296   12698 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 10:23:25.731938   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 10:23:25.731955   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 10:23:25.734928   12698 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:23:25.734950   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 10:23:25.750107   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:23:25.769528   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 10:23:25.769554   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 10:23:25.824578   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 10:23:25.824606   12698 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 10:23:25.826663   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:23:25.836224   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:23:25.864953   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:23:25.864978   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 10:23:25.950334   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:23:25.950362   12698 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 10:23:25.952327   12698 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 10:23:25.952351   12698 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 10:23:26.026376   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:23:26.056901   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 10:23:26.056925   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 10:23:26.076619   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:23:26.092872   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 10:23:26.092899   12698 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 10:23:26.109409   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:23:26.155379   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:23:26.157925   12698 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 10:23:26.157948   12698 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 10:23:26.244219   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 10:23:26.244250   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 10:23:26.311853   12698 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:26.311889   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 10:23:26.393610   12698 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 10:23:26.393630   12698 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 10:23:26.408100   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 10:23:26.408122   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 10:23:26.651564   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:26.705981   12698 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 10:23:26.706008   12698 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 10:23:26.710174   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 10:23:26.710191   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 10:23:26.965108   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 10:23:26.965132   12698 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 10:23:27.071664   12698 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:23:27.071686   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 10:23:27.283675   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 10:23:27.283699   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 10:23:27.442953   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:23:27.616387   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:27.620969   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 10:23:27.620992   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 10:23:27.682527   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:23:27.682553   12698 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 10:23:27.984364   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:23:28.099485   12698 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.689070943s)
	I0729 10:23:28.099517   12698 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
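The replace command completed above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host-side bridge address (192.168.39.1 here). Reconstructed from the sed expression in that command (other Corefile stanzas omitted), the edited Corefile fragment should look roughly like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts block is inserted immediately before the forward plugin so the custom record is answered locally and everything else falls through to the upstream resolver.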
	I0729 10:23:28.607314   12698 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-342031" context rescaled to 1 replicas
	I0729 10:23:29.787337   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:32.143200   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 10:23:32.143236   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:32.146559   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.147055   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:32.147085   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.147278   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:32.147503   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:32.147651   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:32.147807   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:32.182879   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:32.614503   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 10:23:32.616818   12698 pod_ready.go:92] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.616845   12698 pod_ready.go:81] duration metric: took 7.014232912s for pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.616855   12698 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.623581   12698 pod_ready.go:92] pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.623610   12698 pod_ready.go:81] duration metric: took 6.747033ms for pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.623622   12698 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.630924   12698 pod_ready.go:92] pod "etcd-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.630944   12698 pod_ready.go:81] duration metric: took 7.314368ms for pod "etcd-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.630953   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.644101   12698 pod_ready.go:92] pod "kube-apiserver-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.644131   12698 pod_ready.go:81] duration metric: took 13.170911ms for pod "kube-apiserver-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.644147   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.663805   12698 pod_ready.go:92] pod "kube-controller-manager-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.663835   12698 pod_ready.go:81] duration metric: took 19.67932ms for pod "kube-controller-manager-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.663848   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxxfj" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.728469   12698 addons.go:234] Setting addon gcp-auth=true in "addons-342031"
	I0729 10:23:32.728520   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:32.728803   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:32.728832   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:32.743973   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0729 10:23:32.744371   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:32.745252   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:32.745273   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:32.745583   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:32.746043   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:32.746068   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:32.761424   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0729 10:23:32.761843   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:32.762388   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:32.762414   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:32.762812   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:32.763051   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:32.764746   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:32.764976   12698 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 10:23:32.765001   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:32.767285   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.767686   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:32.767717   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.767881   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:32.768064   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:32.768228   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:32.768365   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:33.015459   12698 pod_ready.go:92] pod "kube-proxy-xxxfj" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:33.015494   12698 pod_ready.go:81] duration metric: took 351.637411ms for pod "kube-proxy-xxxfj" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.015508   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.408100   12698 pod_ready.go:92] pod "kube-scheduler-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:33.408122   12698 pod_ready.go:81] duration metric: took 392.606722ms for pod "kube-scheduler-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.408137   12698 pod_ready.go:38] duration metric: took 7.815685471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:23:33.408151   12698 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:23:33.408197   12698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:23:34.049408   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.632795039s)
	I0729 10:23:34.049439   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.584562853s)
	I0729 10:23:34.049473   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049475   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.568516268s)
	I0729 10:23:34.049484   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049496   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049472   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049557   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049581   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.299446941s)
	I0729 10:23:34.049605   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049614   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049536   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.553418805s)
	I0729 10:23:34.049681   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.22299479s)
	I0729 10:23:34.049687   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049695   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049704   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049716   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049830   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.213577349s)
	I0729 10:23:34.049948   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049960   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049543   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050066   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050090   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050106   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050155   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050163   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050171   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050178   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050225   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050244   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050250   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050257   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050265   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050326   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050332   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050326   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.023921952s)
	I0729 10:23:34.050344   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050352   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050356   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050361   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050365   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050399   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.973756437s)
	I0729 10:23:34.050421   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050423   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050429   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050431   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050442   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050449   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050509   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050517   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050693   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050727   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050741   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.941295871s)
	I0729 10:23:34.050765   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050770   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050776   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050781   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050786   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.051086   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.051771   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051820   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051840   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.051847   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.051856   12698 addons.go:475] Verifying addon ingress=true in "addons-342031"
	I0729 10:23:34.051964   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051994   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.052000   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.052008   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.052014   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.053258   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.053288   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053295   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053303   12698 addons.go:475] Verifying addon registry=true in "addons-342031"
	I0729 10:23:34.053517   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.898102796s)
	I0729 10:23:34.053541   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053556   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050749   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053614   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053624   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053632   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.053763   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.402166812s)
	W0729 10:23:34.053786   12698 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 10:23:34.053814   12698 retry.go:31] will retry after 372.256906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
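Note: the apply failure above is the usual CRD ordering race. The VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define it, so its REST mapping does not exist yet and kubectl reports "ensure CRDs are installed first". minikube's addon installer simply retries (the retry.go line above, followed later in this log by an apply --force re-run). Done by hand, a rough equivalent is to wait for the CRD to reach the Established condition before applying the class; the file paths below are the ones from the log, the timeout is illustrative:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
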
	I0729 10:23:34.053968   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053982   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053990   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053997   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054110   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054124   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054261   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.61126683s)
	I0729 10:23:34.054282   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054298   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054389   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054417   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054426   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054434   12698 addons.go:475] Verifying addon metrics-server=true in "addons-342031"
	I0729 10:23:34.054478   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054496   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054511   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054532   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054540   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054568   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054601   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054621   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054628   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054636   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054638   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054641   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054651   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054661   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054668   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054667   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054642   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054652   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054757   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054781   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054789   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054796   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054803   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.051936   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054908   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054918   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054926   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.055242   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055269   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055275   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055416   12698 out.go:177] * Verifying ingress addon...
	I0729 10:23:34.055568   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055612   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055619   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055776   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055825   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055846   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055984   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.056017   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.056024   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.056321   12698 out.go:177] * Verifying registry addon...
	I0729 10:23:34.059340   12698 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 10:23:34.059617   12698 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-342031 service yakd-dashboard -n yakd-dashboard
	
	I0729 10:23:34.060282   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 10:23:34.073659   12698 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 10:23:34.073685   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:34.077292   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.077309   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.077640   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.077649   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.077663   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:23:34.077747   12698 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
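Note: the storage-provisioner-rancher warning above is an ordinary Kubernetes update conflict: something else modified the local-path StorageClass between minikube's read and its write, so the update based on the stale resourceVersion is rejected and has to be retried against the latest object. A rough manual equivalent of what the callback was attempting (a patch sidesteps the read-modify-write race; the class name comes from the message above and the annotation is the standard default-class marking) is:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
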
	I0729 10:23:34.083607   12698 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 10:23:34.083624   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:34.088586   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.088601   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.088896   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.088941   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.088952   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.426903   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:34.565685   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:34.565812   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:34.988298   12698 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.22329162s)
	I0729 10:23:34.988340   12698 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.580122307s)
	I0729 10:23:34.988371   12698 api_server.go:72] duration metric: took 10.039738841s to wait for apiserver process to appear ...
	I0729 10:23:34.988383   12698 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:23:34.988475   12698 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8443/healthz ...
	I0729 10:23:34.988381   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.003973469s)
	I0729 10:23:34.988646   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.988672   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.988925   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.988927   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.988958   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.988975   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.988984   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.989215   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.989232   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.989245   12698 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-342031"
	I0729 10:23:34.990131   12698 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 10:23:34.990963   12698 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 10:23:34.992639   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:34.993195   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 10:23:34.994345   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 10:23:34.994364   12698 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 10:23:34.998412   12698 api_server.go:279] https://192.168.39.224:8443/healthz returned 200:
	ok
	I0729 10:23:34.999584   12698 api_server.go:141] control plane version: v1.30.3
	I0729 10:23:34.999604   12698 api_server.go:131] duration metric: took 11.148651ms to wait for apiserver health ...
	I0729 10:23:34.999612   12698 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:23:35.011026   12698 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 10:23:35.011046   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:35.048270   12698 system_pods.go:59] 19 kube-system pods found
	I0729 10:23:35.048301   12698 system_pods.go:61] "coredns-7db6d8ff4d-7p4nt" [bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c] Running
	I0729 10:23:35.048306   12698 system_pods.go:61] "coredns-7db6d8ff4d-dpx74" [756984e7-bcdb-4738-9d14-7a19eef1223d] Running
	I0729 10:23:35.048311   12698 system_pods.go:61] "csi-hostpath-attacher-0" [14d9045e-0ce7-4b4c-8e60-7b879be9ad87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:23:35.048316   12698 system_pods.go:61] "csi-hostpath-resizer-0" [37a95774-ce07-4299-994d-c54ded0fa6c1] Pending
	I0729 10:23:35.048322   12698 system_pods.go:61] "csi-hostpathplugin-sls2d" [2c6bd926-1f71-43e5-8c84-5c39a668606c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:23:35.048326   12698 system_pods.go:61] "etcd-addons-342031" [14b1d740-0700-442b-90f3-b806012a0848] Running
	I0729 10:23:35.048332   12698 system_pods.go:61] "kube-apiserver-addons-342031" [165f41bf-6de7-4b96-84e5-3a2f2ef072e5] Running
	I0729 10:23:35.048337   12698 system_pods.go:61] "kube-controller-manager-addons-342031" [41ac27d3-0ce0-4622-b164-f20afe162ee7] Running
	I0729 10:23:35.048344   12698 system_pods.go:61] "kube-ingress-dns-minikube" [a8913bd2-d23f-492c-bc4b-dd5175fff394] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 10:23:35.048347   12698 system_pods.go:61] "kube-proxy-xxxfj" [1a170716-f715-4335-95c7-88c60f42a91b] Running
	I0729 10:23:35.048351   12698 system_pods.go:61] "kube-scheduler-addons-342031" [3e16db13-65e8-4ffb-91d5-03f25c7883ad] Running
	I0729 10:23:35.048356   12698 system_pods.go:61] "metrics-server-c59844bb4-xpvk9" [b347f8e7-4e0d-4d6c-98f1-e2325cffef0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:23:35.048362   12698 system_pods.go:61] "nvidia-device-plugin-daemonset-hn9w7" [4ec41c4d-a5b9-4145-965a-16a2cc121387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 10:23:35.048375   12698 system_pods.go:61] "registry-656c9c8d9c-t9mch" [c7896ca2-19fe-4e63-acf0-f820d1e54537] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:23:35.048382   12698 system_pods.go:61] "registry-proxy-vvvpt" [4854d6ef-fcb6-430d-aa34-fba27a2e4685] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:23:35.048388   12698 system_pods.go:61] "snapshot-controller-745499f584-dnhgq" [fd242e46-a424-46b6-89d2-f9d7d1827554] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.048394   12698 system_pods.go:61] "snapshot-controller-745499f584-jwdfc" [70e281a2-c081-4518-a5ea-e9d3f25724b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.048398   12698 system_pods.go:61] "storage-provisioner" [042331d6-ad1c-4aaa-b67e-152bd6e78507] Running
	I0729 10:23:35.048405   12698 system_pods.go:61] "tiller-deploy-6677d64bcd-j4zgl" [622a71ad-23e4-4ae3-bdce-fccd9e31b58c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 10:23:35.048411   12698 system_pods.go:74] duration metric: took 48.794213ms to wait for pod list to return data ...
	I0729 10:23:35.048421   12698 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:23:35.056348   12698 default_sa.go:45] found service account: "default"
	I0729 10:23:35.056374   12698 default_sa.go:55] duration metric: took 7.947228ms for default service account to be created ...
	I0729 10:23:35.056382   12698 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:23:35.070856   12698 system_pods.go:86] 19 kube-system pods found
	I0729 10:23:35.070883   12698 system_pods.go:89] "coredns-7db6d8ff4d-7p4nt" [bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c] Running
	I0729 10:23:35.070889   12698 system_pods.go:89] "coredns-7db6d8ff4d-dpx74" [756984e7-bcdb-4738-9d14-7a19eef1223d] Running
	I0729 10:23:35.070896   12698 system_pods.go:89] "csi-hostpath-attacher-0" [14d9045e-0ce7-4b4c-8e60-7b879be9ad87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:23:35.070902   12698 system_pods.go:89] "csi-hostpath-resizer-0" [37a95774-ce07-4299-994d-c54ded0fa6c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 10:23:35.070911   12698 system_pods.go:89] "csi-hostpathplugin-sls2d" [2c6bd926-1f71-43e5-8c84-5c39a668606c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:23:35.070916   12698 system_pods.go:89] "etcd-addons-342031" [14b1d740-0700-442b-90f3-b806012a0848] Running
	I0729 10:23:35.070921   12698 system_pods.go:89] "kube-apiserver-addons-342031" [165f41bf-6de7-4b96-84e5-3a2f2ef072e5] Running
	I0729 10:23:35.070925   12698 system_pods.go:89] "kube-controller-manager-addons-342031" [41ac27d3-0ce0-4622-b164-f20afe162ee7] Running
	I0729 10:23:35.070931   12698 system_pods.go:89] "kube-ingress-dns-minikube" [a8913bd2-d23f-492c-bc4b-dd5175fff394] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 10:23:35.070935   12698 system_pods.go:89] "kube-proxy-xxxfj" [1a170716-f715-4335-95c7-88c60f42a91b] Running
	I0729 10:23:35.070939   12698 system_pods.go:89] "kube-scheduler-addons-342031" [3e16db13-65e8-4ffb-91d5-03f25c7883ad] Running
	I0729 10:23:35.070948   12698 system_pods.go:89] "metrics-server-c59844bb4-xpvk9" [b347f8e7-4e0d-4d6c-98f1-e2325cffef0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:23:35.070956   12698 system_pods.go:89] "nvidia-device-plugin-daemonset-hn9w7" [4ec41c4d-a5b9-4145-965a-16a2cc121387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 10:23:35.070963   12698 system_pods.go:89] "registry-656c9c8d9c-t9mch" [c7896ca2-19fe-4e63-acf0-f820d1e54537] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:23:35.070968   12698 system_pods.go:89] "registry-proxy-vvvpt" [4854d6ef-fcb6-430d-aa34-fba27a2e4685] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:23:35.070975   12698 system_pods.go:89] "snapshot-controller-745499f584-dnhgq" [fd242e46-a424-46b6-89d2-f9d7d1827554] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.070981   12698 system_pods.go:89] "snapshot-controller-745499f584-jwdfc" [70e281a2-c081-4518-a5ea-e9d3f25724b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.070986   12698 system_pods.go:89] "storage-provisioner" [042331d6-ad1c-4aaa-b67e-152bd6e78507] Running
	I0729 10:23:35.070992   12698 system_pods.go:89] "tiller-deploy-6677d64bcd-j4zgl" [622a71ad-23e4-4ae3-bdce-fccd9e31b58c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 10:23:35.071001   12698 system_pods.go:126] duration metric: took 14.613711ms to wait for k8s-apps to be running ...
	I0729 10:23:35.071008   12698 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:23:35.071060   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
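Note: the kubelet check above relies on systemctl's exit status rather than its output: with --quiet, systemctl is-active prints nothing and simply exits 0 when the unit is active and non-zero otherwise, which is all the wait loop needs. The same check by hand (unit name assumed to be kubelet, as on a standard minikube guest):

	sudo systemctl is-active --quiet kubelet && echo kubelet is running
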
	I0729 10:23:35.076588   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:35.076618   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:35.173173   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 10:23:35.173195   12698 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 10:23:35.245428   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:23:35.245456   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 10:23:35.357975   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:23:35.500259   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:35.563984   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:35.568626   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:35.999529   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:36.064061   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:36.066596   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:36.501168   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:36.565147   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:36.565555   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:36.704557   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.277592908s)
	I0729 10:23:36.704617   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.704624   12698 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.633539866s)
	I0729 10:23:36.704652   12698 system_svc.go:56] duration metric: took 1.633640078s WaitForService to wait for kubelet
	I0729 10:23:36.704676   12698 kubeadm.go:582] duration metric: took 11.756032442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:23:36.704704   12698 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:23:36.704635   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.705135   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.705160   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.705171   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.705180   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.705193   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.705420   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.705455   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.705471   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.714958   12698 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:23:36.714985   12698 node_conditions.go:123] node cpu capacity is 2
	I0729 10:23:36.714995   12698 node_conditions.go:105] duration metric: took 10.282614ms to run NodePressure ...
	I0729 10:23:36.715005   12698 start.go:241] waiting for startup goroutines ...
	I0729 10:23:36.941652   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.583643613s)
	I0729 10:23:36.941699   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.941716   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.941985   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.942019   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.942027   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.942042   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.942049   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.942262   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.942313   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.942339   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.944327   12698 addons.go:475] Verifying addon gcp-auth=true in "addons-342031"
	I0729 10:23:36.946123   12698 out.go:177] * Verifying gcp-auth addon...
	I0729 10:23:36.948158   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 10:23:36.984328   12698 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 10:23:36.984359   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:36.999759   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:37.063688   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:37.067414   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:37.452016   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:37.499529   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:37.617584   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:37.618083   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:37.952505   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.000149   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:38.065561   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:38.066692   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:38.452427   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.499671   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:38.563454   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:38.566038   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:38.952702   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.999037   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:39.064635   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:39.065565   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:39.452608   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:39.500471   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:39.565506   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:39.567479   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:39.951648   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:39.998487   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:40.063273   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:40.065959   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:40.453339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:40.500094   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:40.565486   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:40.566435   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:40.952116   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:40.999715   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:41.065903   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:41.065920   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:41.600915   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:41.601095   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:41.601201   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:41.606414   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:41.952158   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.000195   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:42.064522   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:42.067068   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:42.452993   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.499455   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:42.565019   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:42.567383   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:42.952180   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.999994   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:43.066196   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:43.066829   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:43.452173   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:43.500272   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:43.564141   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:43.566066   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:43.952644   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:43.998570   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:44.063688   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:44.066659   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:44.452192   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:44.499823   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:44.564500   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:44.566654   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:44.952093   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.000191   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:45.064442   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:45.064951   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:45.452221   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.500323   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:45.563600   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:45.565236   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:45.951740   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.999031   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:46.064985   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:46.066003   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:46.453013   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:46.499905   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:46.565154   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:46.567169   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:46.952342   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:46.999979   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:47.065656   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:47.066458   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:47.451632   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:47.498937   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:47.565641   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:47.571491   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:47.952800   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:47.999527   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:48.065167   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:48.065336   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:48.452082   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:48.498940   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:48.564883   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:48.565179   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:48.951995   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:48.999568   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:49.064297   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:49.066817   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:49.452518   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:49.499469   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:49.563988   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:49.565540   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:49.952340   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:49.999755   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:50.065104   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:50.065536   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:50.452707   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:50.499507   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:50.564635   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:50.565699   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:50.952444   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:51.000668   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:51.064015   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:51.065459   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:51.452015   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:51.500432   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:51.563626   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:51.565038   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:51.951800   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.016543   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:52.073463   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:52.078712   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:52.452345   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.500189   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:52.564799   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:52.566172   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:52.951561   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.999552   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:53.063760   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:53.065469   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:53.451642   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:53.499066   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:53.577939   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:53.579136   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.112125   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:54.112255   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:54.112862   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.117174   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.451655   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.498313   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:54.565450   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:54.568339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.952449   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.999294   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:55.067828   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:55.068536   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:55.452200   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:55.499442   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:55.563820   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:55.565499   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:55.951726   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:55.998611   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:56.064375   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:56.065427   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:56.451833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:56.499000   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:56.563202   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:56.567351   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:56.952445   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:56.999222   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:57.063395   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:57.067543   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:57.451635   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:57.505014   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:57.573323   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:57.574301   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:57.951759   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:57.998798   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:58.071158   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:58.077478   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:58.453822   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:58.500771   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:58.569250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:58.569759   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:58.951617   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:58.999045   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:59.066210   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:59.066788   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:59.451222   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:59.499282   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:59.565517   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:59.566613   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:59.953536   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:00.000754   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:00.063348   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:00.064558   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:00.452382   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:00.502387   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:00.563496   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:00.566531   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:00.951888   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.000600   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:01.064575   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:01.067404   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:01.451960   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.499288   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:01.564948   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:01.564956   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:01.951594   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.998303   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:02.063172   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:02.065790   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:02.452827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:02.499208   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:02.563642   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:02.566624   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:02.952394   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:02.999826   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:03.064192   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:03.066282   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:03.452879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:03.499960   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:03.564556   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:03.565719   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:03.952552   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:03.998684   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:04.063935   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:04.065496   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:04.462092   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:04.498998   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:04.564610   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:04.564715   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:04.951874   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:04.999392   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:05.065147   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:05.065575   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:05.452484   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:05.498653   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:05.564642   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:05.566583   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:05.952295   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:05.999559   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:06.064522   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:06.067275   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:06.452156   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:06.499838   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:06.564126   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:06.565959   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:06.951880   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.001277   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:07.063905   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:07.064994   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:07.452551   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.498590   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:07.564286   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:07.565398   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:07.951898   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.998995   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:08.065743   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:08.065998   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:08.451965   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:08.504204   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:08.578339   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:08.581885   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:08.952530   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:09.000249   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:09.064577   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:09.065458   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:09.452144   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:09.499261   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:09.567594   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:09.569376   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:09.952220   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.000372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:10.065931   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:10.067176   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:10.451907   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.498976   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:10.565948   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:10.567184   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:10.952223   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.999311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:11.066344   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:11.066471   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:11.451372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:11.499258   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:11.564309   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:11.566879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:11.954976   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:11.999218   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:12.065366   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:12.065859   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:12.452071   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:12.501676   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:12.564398   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:12.567702   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:12.952031   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:12.998893   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:13.067163   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:13.069555   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:13.451456   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:13.499186   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:13.563728   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:13.566664   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:13.952559   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:13.998539   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:14.063772   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:14.064785   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:14.452879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:14.499118   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:14.563653   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:14.564863   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:14.953311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.007426   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:15.064503   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:15.066250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:15.453280   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.499464   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:15.563677   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:15.564973   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:15.952357   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.999680   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:16.067307   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:16.068742   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:16.452464   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:16.498740   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:16.564769   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:16.564915   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:16.952450   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.000043   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:17.065031   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:17.065372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:17.451584   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.498643   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:17.564549   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:17.566103   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:17.952547   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.998827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:18.063941   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:18.065203   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:18.452041   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:18.499376   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:18.564040   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:18.566358   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:18.952182   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:18.999907   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:19.064998   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:19.066671   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:19.452250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:19.499873   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:19.569963   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:19.582200   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:19.952408   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:19.999569   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:20.063458   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:20.066407   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:20.726177   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:20.728278   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:20.731000   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:20.732932   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:20.952392   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.001765   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:21.064088   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:21.064824   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:21.452914   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.499830   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:21.563599   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:21.565795   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:21.954833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.998351   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:22.064901   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:22.065560   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:22.451814   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:22.498621   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:22.564088   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:22.567339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:22.951869   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.001660   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:23.065495   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:23.065662   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:23.451797   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.498767   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:23.563583   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:23.565484   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:23.952034   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.998752   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:24.063708   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:24.064636   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:24.452384   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:24.500618   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:24.563656   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:24.567700   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:24.951933   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:24.999389   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:25.064269   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:25.065815   12698 kapi.go:107] duration metric: took 51.005539544s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 10:24:25.452779   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:25.499475   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:25.565331   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:25.952546   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:25.998349   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:26.063767   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:26.451683   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:26.499339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:26.563337   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:26.951363   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.021757   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:27.071750   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:27.452193   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.501424   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:27.564259   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:27.952619   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.998963   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:28.076451   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:28.453142   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:28.499441   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:28.563217   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:28.952232   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:28.999084   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:29.064479   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:29.451426   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:29.499310   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:29.563734   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:29.956228   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:29.998993   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:30.064633   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:30.452772   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:30.499727   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:30.569479   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:30.952500   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:31.001564   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:31.063775   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:31.451762   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:31.500144   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:31.569891   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:31.953386   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.001601   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:32.064090   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:32.452278   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.499534   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:32.564215   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:32.954747   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.998443   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:33.063476   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:33.451640   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:33.499813   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:33.563473   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:33.952725   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:33.998798   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:34.064867   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:34.451261   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:34.499470   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:34.564234   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:34.952067   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:34.999183   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:35.064073   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:35.453150   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:35.500025   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:35.570321   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:35.952055   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.001627   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:36.063666   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:36.452143   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.499088   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:36.564906   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:36.953864   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.999311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:37.064497   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:37.452876   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:37.500255   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:37.563643   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:37.951826   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.001460   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:38.074550   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:38.452773   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.499741   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:38.563964   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:38.954669   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.998002   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:39.064070   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:39.675869   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:39.676935   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:39.679935   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:39.953276   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.007884   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:40.074086   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:40.452297   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.504026   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:40.564417   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:40.952674   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.999006   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:41.064136   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:41.452104   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:41.499252   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:41.564210   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:41.952573   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.000124   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:42.064485   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:42.453090   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.500119   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:42.565465   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:42.952736   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.999215   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:43.063572   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:43.451431   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:43.498530   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:43.564156   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:43.959549   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:43.999654   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:44.064673   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:44.452078   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:44.501626   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:44.565560   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:44.953675   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.003407   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:45.063989   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:45.452060   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.498565   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:45.564096   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:45.952371   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.999590   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:46.064405   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:46.457352   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:46.499698   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:46.566785   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.046201   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.046833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:47.065424   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.452520   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.499254   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:47.563329   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.952248   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.999316   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:48.063529   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:48.451292   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:48.499520   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:48.563464   12698 kapi.go:107] duration metric: took 1m14.504123177s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 10:24:48.951582   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:48.999039   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:49.451972   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:49.500106   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:49.952271   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.000194   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:50.453769   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.500630   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:50.952649   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.998605   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:51.451390   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:51.500161   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:51.951319   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:51.999328   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:52.457663   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:52.498444   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:52.952827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:52.999468   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:53.452778   12698 kapi.go:107] duration metric: took 1m16.504620091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 10:24:53.454373   12698 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-342031 cluster.
	I0729 10:24:53.455768   12698 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 10:24:53.457092   12698 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 10:24:53.499033   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:53.999761   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:54.499986   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:54.999680   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:55.499953   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.002851   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.502598   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.999758   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:57.499588   12698 kapi.go:107] duration metric: took 1m22.50638969s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 10:24:57.501611   12698 out.go:177] * Enabled addons: helm-tiller, metrics-server, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0729 10:24:57.503023   12698 addons.go:510] duration metric: took 1m32.554350492s for enable addons: enabled=[helm-tiller metrics-server nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0729 10:24:57.503070   12698 start.go:246] waiting for cluster config update ...
	I0729 10:24:57.503091   12698 start.go:255] writing updated cluster config ...
	I0729 10:24:57.503363   12698 ssh_runner.go:195] Run: rm -f paused
	I0729 10:24:57.571903   12698 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 10:24:57.573560   12698 out.go:177] * Done! kubectl is now configured to use "addons-342031" cluster and "default" namespace by default
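The gcp-auth messages above describe opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As a minimal, hypothetical sketch (not taken from this run: the pod name and image are placeholders, and the label value "true" is an assumption since the output only names the key), such a pod configuration might look like:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"    # key named in the gcp-auth output above; value assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                    # placeholder image

For pods created before the addon was enabled, the output notes the alternatives: recreate them, or rerun addons enable with --refresh.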
	
	
	==> CRI-O <==
	Jul 29 10:28:18 addons-342031 crio[678]: time="2024-07-29 10:28:18.294793758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722248898294765163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589533,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffcba136-b110-475a-96e7-bbd17d4fe7c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:28:18 addons-342031 crio[678]: time="2024-07-29 10:28:18.295324545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8a222da-ba42-4478-82f2-5dc673bdde88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:28:18 addons-342031 crio[678]: time="2024-07-29 10:28:18.295380683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8a222da-ba42-4478-82f2-5dc673bdde88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:28:18 addons-342031 crio[678]: time="2024-07-29 10:28:18.295786781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:363f70a97eb9283415a8bef3ca142364d0fed767451822e183cf12167396c2f3,PodSandboxId:6e31ec3673380c4ea87bb28a0554ab05502f1e69e334aa12fd5617262d826fb5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722248891193219804,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-9tvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e44723-d4c5-4175-ba9d-52cc5bf7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 28bad8c3,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774561bc7f43d2b846f5a26ce56a79d16e4380ba1d072dfc07dbfc9988951499,PodSandboxId:443c3d1fa171154089d6a79196f3a4a0c23cc287fd12ef9684c89a694c194783,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722248748958202872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2af2b59-c59f-4341-a2d3-88a65f799b1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 4b5c6f8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13d8d7564bcf3beb1f7d3536172bdce636bb39c681e8d98755a58067cb39a2,PodSandboxId:2caaae57648c154553576912f8a1a41c7006b108eb5b4d4733b6ad9096268810,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722248737818236102,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-vd7js,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7d782200-b333-4563-9bc5-012f43886495,},Annotations:map[string]string{io.kubernetes.container.hash: 848e50a3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0856d93df5fcf9fc505cdd5e9bf0eba0403ddcf2c48b601b40f709b26c1d7ff0,PodSandboxId:2db77a19fbd5db1e083599871468850327f59b6522e2d09bf175a0c8c76d2bbc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722248701566267530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92313c3e-4d9c-446b-9eba-48bd5781c42a,},Annotations:map[string]string{io.kubernetes.container.hash: 13cf5796,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db444daa29ef8eff8416f506ef3056e258a3455171f0a13203591ceb6af8462,PodSandboxId:b90f90fd9780d6dbf1d4086b7924853e4de7596ea2749d96a5ed0a01ad46e129,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722248667589778957,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2n787,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 20d03050-1a5f-4ffa-86b5-7dbc81463b05,},Annotations:map[string]string{io.kubernetes.container.hash: 37889472,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b34a25af319ebefaa689aa8887452cc2a39add767701722a8d9403d79a6763,PodSandboxId:39eeb4c6bc65327bf4b3ccf9558b00b7f1112ad6edb33b5f1a0cb18548637565,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722248640283925
160,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xpvk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b347f8e7-4e0d-4d6c-98f1-e2325cffef0e,},Annotations:map[string]string{io.kubernetes.container.hash: e219e715,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d,PodSandboxId:c15c19ee977c365277b59d5a1056e978c62f066b6a54f6fafddbab4fbf5678e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722248611446925625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042331d6-ad1c-4aaa-b67e-152bd6e78507,},Annotations:map[string]string{io.kubernetes.container.hash: 1c104067,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88,PodSandboxId:26e3990cc3f2282c2acd5839ddbdc0afceb1618249fabcb0a51122a8c8ce42a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722248609618737147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7p4nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9f966ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23,PodSandboxId:aa07550982de9a66384147147b14acef0b54eada1de00c14c08d2e23252dfc40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722248607009896480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxxfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a170716-f715-4335-95c7-88c60f42a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 526e7c73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9,PodSandboxId:8ce6c1a682d5cade6b6719285ad124fc4fb0d79a622388110734040114988f4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722248585958740064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9518d19b725d3da239da5734075da82a,},Annotations:map[string]string{io.kubernetes.container.hash: a7bb2e4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a,PodSandboxId:a3eeabf960faedb8ce22b13aa24b81ffef325af6b7533c2435098e6ae2d7d631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722248585912328477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a36bc1144e5af022ac844378fcd3642e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291,PodSandboxId:4c00e4633b495221bf8e40cfd00c340fe1a68e913a1ffdfffce8b0393c9789cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722248585908238289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ebbb59e746d97a13eaae9907ddaef2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,PodSandboxId:519bc6c1575ee50ea4d1021a124af96bdb9775fcc39719b307f76cf96000b4ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722248585840188170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 411a1f9fbbb593cb4784c6aa4055be52,},Annotations:map[string]string{io.kubernetes.container.hash: 700e155f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8a222da-ba42-4478-82f2-5dc673bdde88 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	363f70a97eb92       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   7 seconds ago       Running             hello-world-app           0                   6e31ec3673380       hello-world-app-6778b5fc9f-9tvlv
	774561bc7f43d       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         2 minutes ago       Running             nginx                     0                   443c3d1fa1711       nginx
	3a13d8d7564bc       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   2 minutes ago       Running             headlamp                  0                   2caaae57648c1       headlamp-7867546754-vd7js
	0856d93df5fcf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   2db77a19fbd5d       busybox
	9db444daa29ef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        3 minutes ago       Running             local-path-provisioner    0                   b90f90fd9780d       local-path-provisioner-8d985888d-2n787
	c9b34a25af319       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   4 minutes ago       Running             metrics-server            0                   39eeb4c6bc653       metrics-server-c59844bb4-xpvk9
	49ac166beb18a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        4 minutes ago       Running             storage-provisioner       0                   c15c19ee977c3       storage-provisioner
	214389dc390da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        4 minutes ago       Running             coredns                   0                   26e3990cc3f22       coredns-7db6d8ff4d-7p4nt
	6a6a2c9fa4cd5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        4 minutes ago       Running             kube-proxy                0                   aa07550982de9       kube-proxy-xxxfj
	1a975bababdfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        5 minutes ago       Running             etcd                      0                   8ce6c1a682d5c       etcd-addons-342031
	7d87bbdda87a5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        5 minutes ago       Running             kube-scheduler            0                   a3eeabf960fae       kube-scheduler-addons-342031
	4f49233f8bdc2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        5 minutes ago       Running             kube-controller-manager   0                   4c00e4633b495       kube-controller-manager-addons-342031
	665880b8788e6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        5 minutes ago       Running             kube-apiserver            0                   519bc6c1575ee       kube-apiserver-addons-342031
	
	
	==> coredns [214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88] <==
	[INFO] 10.244.0.8:44754 - 33077 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000294331s
	[INFO] 10.244.0.8:35406 - 31702 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058837s
	[INFO] 10.244.0.8:35406 - 32424 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108327s
	[INFO] 10.244.0.8:56263 - 19589 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073544s
	[INFO] 10.244.0.8:56263 - 31367 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091648s
	[INFO] 10.244.0.8:46604 - 8653 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057047s
	[INFO] 10.244.0.8:46604 - 35023 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000040978s
	[INFO] 10.244.0.8:60575 - 8058 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151614s
	[INFO] 10.244.0.8:60575 - 28743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030657s
	[INFO] 10.244.0.8:50808 - 6850 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005485s
	[INFO] 10.244.0.8:50808 - 26564 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030319s
	[INFO] 10.244.0.8:39296 - 43933 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045647s
	[INFO] 10.244.0.8:39296 - 47263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000023053s
	[INFO] 10.244.0.8:37039 - 38574 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000042689s
	[INFO] 10.244.0.8:37039 - 23456 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096291s
	[INFO] 10.244.0.22:47887 - 6552 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000453705s
	[INFO] 10.244.0.22:59406 - 36777 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151571s
	[INFO] 10.244.0.22:51100 - 13843 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108325s
	[INFO] 10.244.0.22:33937 - 16849 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077502s
	[INFO] 10.244.0.22:33724 - 1871 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017102s
	[INFO] 10.244.0.22:37650 - 6852 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174764s
	[INFO] 10.244.0.22:55938 - 30152 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00119209s
	[INFO] 10.244.0.22:39715 - 63188 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001474916s
	[INFO] 10.244.0.26:54234 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000448092s
	[INFO] 10.244.0.26:44655 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148009s
	
	
	==> describe nodes <==
	Name:               addons-342031
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-342031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=addons-342031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_23_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-342031
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:23:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-342031
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:28:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    addons-342031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef64a556b04d4dc1b1de2f3ff74bb9cb
	  System UUID:                ef64a556-b04d-4dc1-b1de-2f3ff74bb9cb
	  Boot ID:                    d471fc6e-08fb-4c3c-ab9d-1544ab7820e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     hello-world-app-6778b5fc9f-9tvlv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  headlamp                    headlamp-7867546754-vd7js                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-7db6d8ff4d-7p4nt                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m54s
	  kube-system                 etcd-addons-342031                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m7s
	  kube-system                 kube-apiserver-addons-342031              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-342031     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-xxxfj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-addons-342031              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 metrics-server-c59844bb4-xpvk9            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m48s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-8d985888d-2n787    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-342031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-342031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node addons-342031 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m7s                   kubelet          Node addons-342031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s                   kubelet          Node addons-342031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s                   kubelet          Node addons-342031 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m6s                   kubelet          Node addons-342031 status is now: NodeReady
	  Normal  RegisteredNode           4m54s                  node-controller  Node addons-342031 event: Registered Node addons-342031 in Controller
	
	
	==> dmesg <==
	[  +5.197569] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.011517] kauditd_printk_skb: 145 callbacks suppressed
	[  +8.338938] kauditd_printk_skb: 71 callbacks suppressed
	[Jul29 10:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.222823] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.511401] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.696308] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.313767] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.147380] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.164374] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.682264] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.240566] kauditd_printk_skb: 7 callbacks suppressed
	[Jul29 10:25] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.107081] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.302284] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.131048] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.026927] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.099838] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.038833] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.228653] kauditd_printk_skb: 8 callbacks suppressed
	[Jul29 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.342837] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.264022] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 10:28] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.357462] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9] <==
	{"level":"warn","ts":"2024-07-29T10:24:39.661937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.636967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11625"}
	{"level":"info","ts":"2024-07-29T10:24:39.661981Z","caller":"traceutil/trace.go:171","msg":"trace[1368386628] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1093; }","duration":"220.708184ms","start":"2024-07-29T10:24:39.441266Z","end":"2024-07-29T10:24:39.661974Z","steps":["trace[1368386628] 'agreement among raft nodes before linearized reading'  (duration: 220.592572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:24:39.662917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.533831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T10:24:39.663436Z","caller":"traceutil/trace.go:171","msg":"trace[873553489] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1093; }","duration":"112.075064ms","start":"2024-07-29T10:24:39.551352Z","end":"2024-07-29T10:24:39.663427Z","steps":["trace[873553489] 'agreement among raft nodes before linearized reading'  (duration: 111.476513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:24:39.666234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.997887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-29T10:24:39.666286Z","caller":"traceutil/trace.go:171","msg":"trace[359601509] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1093; }","duration":"180.084813ms","start":"2024-07-29T10:24:39.486193Z","end":"2024-07-29T10:24:39.666278Z","steps":["trace[359601509] 'agreement among raft nodes before linearized reading'  (duration: 176.206598ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:24:47.030978Z","caller":"traceutil/trace.go:171","msg":"trace[856797194] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"170.258553ms","start":"2024-07-29T10:24:46.8607Z","end":"2024-07-29T10:24:47.030959Z","steps":["trace[856797194] 'process raft request'  (duration: 170.043405ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:24:59.327904Z","caller":"traceutil/trace.go:171","msg":"trace[76740972] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"233.1393ms","start":"2024-07-29T10:24:59.094751Z","end":"2024-07-29T10:24:59.32789Z","steps":["trace[76740972] 'process raft request'  (duration: 233.054114ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:25:37.651077Z","caller":"traceutil/trace.go:171","msg":"trace[2136260228] linearizableReadLoop","detail":"{readStateIndex:1571; appliedIndex:1570; }","duration":"119.730832ms","start":"2024-07-29T10:25:37.531316Z","end":"2024-07-29T10:25:37.651047Z","steps":["trace[2136260228] 'read index received'  (duration: 119.548204ms)","trace[2136260228] 'applied index is now lower than readState.Index'  (duration: 182.077µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T10:25:37.651823Z","caller":"traceutil/trace.go:171","msg":"trace[613066166] transaction","detail":"{read_only:false; response_revision:1520; number_of_response:1; }","duration":"210.390724ms","start":"2024-07-29T10:25:37.441422Z","end":"2024-07-29T10:25:37.651812Z","steps":["trace[613066166] 'process raft request'  (duration: 209.488857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:25:37.652497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.002017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T10:25:37.653258Z","caller":"traceutil/trace.go:171","msg":"trace[1821470579] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1520; }","duration":"121.977594ms","start":"2024-07-29T10:25:37.531268Z","end":"2024-07-29T10:25:37.653246Z","steps":["trace[1821470579] 'agreement among raft nodes before linearized reading'  (duration: 119.990641ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:26:10.095664Z","caller":"traceutil/trace.go:171","msg":"trace[1586516240] linearizableReadLoop","detail":"{readStateIndex:1723; appliedIndex:1722; }","duration":"318.187478ms","start":"2024-07-29T10:26:09.777464Z","end":"2024-07-29T10:26:10.095651Z","steps":["trace[1586516240] 'read index received'  (duration: 318.026338ms)","trace[1586516240] 'applied index is now lower than readState.Index'  (duration: 160.739µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T10:26:10.095921Z","caller":"traceutil/trace.go:171","msg":"trace[1816313529] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"384.938535ms","start":"2024-07-29T10:26:09.710971Z","end":"2024-07-29T10:26:10.095909Z","steps":["trace[1816313529] 'process raft request'  (duration: 384.602483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.096052Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:26:09.710952Z","time spent":"385.011765ms","remote":"127.0.0.1:36184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1661 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-29T10:26:10.096213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.746178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T10:26:10.096277Z","caller":"traceutil/trace.go:171","msg":"trace[671831025] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1665; }","duration":"318.832823ms","start":"2024-07-29T10:26:09.777437Z","end":"2024-07-29T10:26:10.09627Z","steps":["trace[671831025] 'agreement among raft nodes before linearized reading'  (duration: 318.744308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.096302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:26:09.777425Z","time spent":"318.870569ms","remote":"127.0.0.1:36102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":9,"response size":30,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	{"level":"warn","ts":"2024-07-29T10:26:10.096459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.292767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-29T10:26:10.09657Z","caller":"traceutil/trace.go:171","msg":"trace[1250267455] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1665; }","duration":"264.424293ms","start":"2024-07-29T10:26:09.83214Z","end":"2024-07-29T10:26:10.096564Z","steps":["trace[1250267455] 'agreement among raft nodes before linearized reading'  (duration: 264.265303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.097379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.006506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T10:26:10.097428Z","caller":"traceutil/trace.go:171","msg":"trace[167439714] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1665; }","duration":"217.077453ms","start":"2024-07-29T10:26:09.88034Z","end":"2024-07-29T10:26:10.097418Z","steps":["trace[167439714] 'agreement among raft nodes before linearized reading'  (duration: 217.000217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.097521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.520262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T10:26:10.097778Z","caller":"traceutil/trace.go:171","msg":"trace[1034892266] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1665; }","duration":"232.796592ms","start":"2024-07-29T10:26:09.864973Z","end":"2024-07-29T10:26:10.09777Z","steps":["trace[1034892266] 'agreement among raft nodes before linearized reading'  (duration: 232.526285ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:26:14.94637Z","caller":"traceutil/trace.go:171","msg":"trace[420954400] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"218.31713ms","start":"2024-07-29T10:26:14.72804Z","end":"2024-07-29T10:26:14.946357Z","steps":["trace[420954400] 'process raft request'  (duration: 218.231021ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:28:18 up 5 min,  0 users,  load average: 0.41, 1.04, 0.58
	Linux addons-342031 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0729 10:25:11.002781       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	E0729 10:25:11.009969       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	E0729 10:25:11.036984       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	I0729 10:25:11.129096       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 10:25:30.586063       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.249.49"}
	I0729 10:25:44.317840       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 10:25:44.496219       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.27.159"}
	I0729 10:25:50.045882       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 10:25:51.078521       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 10:26:17.147217       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 10:26:42.910863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.910910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.937800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.937868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.972879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.972935       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.981462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.981520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.997288       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.997702       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 10:26:43.973821       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 10:26:43.999163       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 10:26:44.008492       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 10:28:08.175198       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.23.132"}
	
	
	==> kube-controller-manager [4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291] <==
	W0729 10:27:11.028196       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:27:11.028308       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:27:20.527110       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:27:20.527296       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:27:20.881575       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:27:20.881622       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:27:28.568738       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:27:28.568784       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:27:59.268470       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:27:59.268696       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:28:02.694392       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:28:02.694447       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:28:07.904326       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:28:07.904658       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 10:28:08.032002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="49.459666ms"
	I0729 10:28:08.047772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="15.032845ms"
	I0729 10:28:08.049133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="49.118µs"
	I0729 10:28:08.053370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="96.937µs"
	I0729 10:28:10.509036       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 10:28:10.511244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.275µs"
	I0729 10:28:10.516648       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 10:28:11.880175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.528303ms"
	I0729 10:28:11.881118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="34.076µs"
	W0729 10:28:11.952047       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:28:11.952095       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23] <==
	I0729 10:23:27.854333       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:23:27.874375       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.224"]
	I0729 10:23:27.982766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:23:27.982861       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:23:27.982884       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:23:27.987783       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:23:27.988045       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:23:27.988076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:23:27.992214       1 config.go:192] "Starting service config controller"
	I0729 10:23:27.992225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:23:27.992246       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:23:27.992249       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:23:27.992632       1 config.go:319] "Starting node config controller"
	I0729 10:23:27.992640       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:23:28.092639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:23:28.092673       1 shared_informer.go:320] Caches are synced for node config
	I0729 10:23:28.092683       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a] <==
	W0729 10:23:09.501344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:23:09.501490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:23:09.524429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:23:09.524674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:23:09.524435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.524857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.646567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:23:09.646803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 10:23:09.658779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.660136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.847089       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:23:09.847218       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:23:09.847717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 10:23:09.847838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 10:23:09.927311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.927406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.939264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:23:09.939422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:23:09.948624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 10:23:09.948827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 10:23:09.984184       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.984282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.990489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:23:09.990634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 10:23:12.368327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 10:28:09 addons-342031 kubelet[1272]: I0729 10:28:09.842374    1272 scope.go:117] "RemoveContainer" containerID="08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902"
	Jul 29 10:28:09 addons-342031 kubelet[1272]: I0729 10:28:09.861861    1272 scope.go:117] "RemoveContainer" containerID="08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902"
	Jul 29 10:28:09 addons-342031 kubelet[1272]: E0729 10:28:09.862868    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902\": container with ID starting with 08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902 not found: ID does not exist" containerID="08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902"
	Jul 29 10:28:09 addons-342031 kubelet[1272]: I0729 10:28:09.862906    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902"} err="failed to get container status \"08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902\": rpc error: code = NotFound desc = could not find container \"08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902\": container with ID starting with 08fd45d7f0fe9ba560e7aa01c34ee0632d75f348c51869f381c6dfb8cf9ab902 not found: ID does not exist"
	Jul 29 10:28:11 addons-342031 kubelet[1272]: I0729 10:28:11.498871    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063c8bb6-aca1-4489-b03f-3c12fa68e1f4" path="/var/lib/kubelet/pods/063c8bb6-aca1-4489-b03f-3c12fa68e1f4/volumes"
	Jul 29 10:28:11 addons-342031 kubelet[1272]: I0729 10:28:11.499286    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1417aa2e-6511-4aa9-8d6f-716b7c3830e3" path="/var/lib/kubelet/pods/1417aa2e-6511-4aa9-8d6f-716b7c3830e3/volumes"
	Jul 29 10:28:11 addons-342031 kubelet[1272]: I0729 10:28:11.500032    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8913bd2-d23f-492c-bc4b-dd5175fff394" path="/var/lib/kubelet/pods/a8913bd2-d23f-492c-bc4b-dd5175fff394/volumes"
	Jul 29 10:28:11 addons-342031 kubelet[1272]: E0729 10:28:11.510381    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:28:11 addons-342031 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:28:11 addons-342031 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:28:11 addons-342031 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:28:11 addons-342031 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.735111    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pmgp\" (UniqueName: \"kubernetes.io/projected/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-kube-api-access-4pmgp\") pod \"e61f3ae5-db28-42bf-b9a8-15ada1f6b931\" (UID: \"e61f3ae5-db28-42bf-b9a8-15ada1f6b931\") "
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.735163    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-webhook-cert\") pod \"e61f3ae5-db28-42bf-b9a8-15ada1f6b931\" (UID: \"e61f3ae5-db28-42bf-b9a8-15ada1f6b931\") "
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.739594    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e61f3ae5-db28-42bf-b9a8-15ada1f6b931" (UID: "e61f3ae5-db28-42bf-b9a8-15ada1f6b931"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.740866    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-kube-api-access-4pmgp" (OuterVolumeSpecName: "kube-api-access-4pmgp") pod "e61f3ae5-db28-42bf-b9a8-15ada1f6b931" (UID: "e61f3ae5-db28-42bf-b9a8-15ada1f6b931"). InnerVolumeSpecName "kube-api-access-4pmgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.836340    1272 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-webhook-cert\") on node \"addons-342031\" DevicePath \"\""
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.836397    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4pmgp\" (UniqueName: \"kubernetes.io/projected/e61f3ae5-db28-42bf-b9a8-15ada1f6b931-kube-api-access-4pmgp\") on node \"addons-342031\" DevicePath \"\""
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.867356    1272 scope.go:117] "RemoveContainer" containerID="3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26"
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.888505    1272 scope.go:117] "RemoveContainer" containerID="3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26"
	Jul 29 10:28:13 addons-342031 kubelet[1272]: E0729 10:28:13.888966    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26\": container with ID starting with 3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26 not found: ID does not exist" containerID="3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26"
	Jul 29 10:28:13 addons-342031 kubelet[1272]: I0729 10:28:13.889013    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26"} err="failed to get container status \"3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26\": rpc error: code = NotFound desc = could not find container \"3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26\": container with ID starting with 3258ef351cdfbcb38c0b08f66e147edc358fdae5c9d6303edc676fe621cd1d26 not found: ID does not exist"
	Jul 29 10:28:14 addons-342031 kubelet[1272]: I0729 10:28:14.698082    1272 scope.go:117] "RemoveContainer" containerID="6da3d4eee860723c8e6c675e32484831d1c87139c7da5f45bbb14f0607173ffa"
	Jul 29 10:28:14 addons-342031 kubelet[1272]: I0729 10:28:14.722266    1272 scope.go:117] "RemoveContainer" containerID="341ef0c8d58e85e31d9e024e6426ff606e31ada3f0336a4512f7e3dd0f386b1f"
	Jul 29 10:28:15 addons-342031 kubelet[1272]: I0729 10:28:15.496891    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61f3ae5-db28-42bf-b9a8-15ada1f6b931" path="/var/lib/kubelet/pods/e61f3ae5-db28-42bf-b9a8-15ada1f6b931/volumes"
	
	
	==> storage-provisioner [49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d] <==
	I0729 10:23:33.149814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 10:23:33.170370       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 10:23:33.170422       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 10:23:33.186206       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 10:23:33.186910       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f93029a-5378-4c23-a80c-b0508d8c0c0f", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e became leader
	I0729 10:23:33.190272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e!
	I0729 10:23:33.291905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-342031 -n addons-342031
helpers_test.go:261: (dbg) Run:  kubectl --context addons-342031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.44s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (359.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.734991ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-xpvk9" [b347f8e7-4e0d-4d6c-98f1-e2325cffef0e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005301722s
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (65.737505ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m14.862571809s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (64.763338ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m18.196948926s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (66.963868ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m24.189653741s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (69.948296ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m30.051052782s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (76.451986ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m38.264595364s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (68.412283ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 2m53.45749501s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (61.073644ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 3m23.716935659s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (66.017885ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 3m41.038221482s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (66.229519ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 4m36.356937054s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (65.83153ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 5m43.062336772s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (67.513158ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 6m26.712134777s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (64.645777ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 7m16.593290963s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-342031 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-342031 top pods -n kube-system: exit status 1 (65.097113ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-7p4nt, age: 8m6.355599537s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
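What failed above is only the repeated kubectl top query: every attempt returned "Metrics not available" until the test gave up. A minimal stand-alone version of that polling looks roughly like this (the context name is taken from the log; the 60-attempt/10-second budget is illustrative, not the harness's actual retry schedule):

	# poll kube-system pod metrics until metrics-server starts answering
	for i in $(seq 1 60); do
	  kubectl --context addons-342031 top pods -n kube-system && break
	  sleep 10
	done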
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-342031 -n addons-342031
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 logs -n 25: (1.343749536s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-876146                                                                     | download-only-876146 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-960068 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | binary-mirror-960068                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33367                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-960068                                                                     | binary-mirror-960068 | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC |                     |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-342031 --wait=true                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:22 UTC | 29 Jul 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | -p addons-342031                                                                            |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-342031 ssh cat                                                                       | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | /opt/local-path-provisioner/pvc-48e69630-5ff6-45b0-be49-8c195291cc40_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | -p addons-342031                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| ip      | addons-342031 ip                                                                            | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC | 29 Jul 24 10:25 UTC |
	|         | addons-342031                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-342031 ssh curl -s                                                                   | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-342031 addons                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:26 UTC | 29 Jul 24 10:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-342031 addons                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:26 UTC | 29 Jul 24 10:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-342031 ip                                                                            | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-342031 addons disable                                                                | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:28 UTC | 29 Jul 24 10:28 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-342031 addons                                                                        | addons-342031        | jenkins | v1.33.1 | 29 Jul 24 10:31 UTC | 29 Jul 24 10:31 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:22:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:22:26.509615   12698 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:22:26.509721   12698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:22:26.509731   12698 out.go:304] Setting ErrFile to fd 2...
	I0729 10:22:26.509735   12698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:22:26.509914   12698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:22:26.510506   12698 out.go:298] Setting JSON to false
	I0729 10:22:26.511389   12698 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":292,"bootTime":1722248254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:22:26.511444   12698 start.go:139] virtualization: kvm guest
	I0729 10:22:26.513397   12698 out.go:177] * [addons-342031] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:22:26.514719   12698 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:22:26.514718   12698 notify.go:220] Checking for updates...
	I0729 10:22:26.516186   12698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:22:26.517423   12698 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:22:26.518519   12698 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:26.519638   12698 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:22:26.520663   12698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:22:26.521870   12698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:22:26.553944   12698 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 10:22:26.555240   12698 start.go:297] selected driver: kvm2
	I0729 10:22:26.555261   12698 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:22:26.555273   12698 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:22:26.555930   12698 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:22:26.555994   12698 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:22:26.569912   12698 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:22:26.569951   12698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:22:26.570162   12698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:22:26.570219   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:22:26.570231   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:22:26.570237   12698 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:22:26.570286   12698 start.go:340] cluster config:
	{Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:22:26.570368   12698 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:22:26.572152   12698 out.go:177] * Starting "addons-342031" primary control-plane node in "addons-342031" cluster
	I0729 10:22:26.573232   12698 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:22:26.573267   12698 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:22:26.573279   12698 cache.go:56] Caching tarball of preloaded images
	I0729 10:22:26.573357   12698 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:22:26.573370   12698 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:22:26.573697   12698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json ...
	I0729 10:22:26.573722   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json: {Name:mkb6347d0153e8c41bb0cc11c9c9fd0fb7c24f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:22:26.573892   12698 start.go:360] acquireMachinesLock for addons-342031: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:22:26.573954   12698 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "addons-342031"
	I0729 10:22:26.573975   12698 start.go:93] Provisioning new machine with config: &{Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:22:26.574046   12698 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 10:22:26.575638   12698 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 10:22:26.575757   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:22:26.575805   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:22:26.589669   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0729 10:22:26.590158   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:22:26.590932   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:22:26.590956   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:22:26.591306   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:22:26.591524   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:26.591687   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:26.591867   12698 start.go:159] libmachine.API.Create for "addons-342031" (driver="kvm2")
	I0729 10:22:26.591895   12698 client.go:168] LocalClient.Create starting
	I0729 10:22:26.591928   12698 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:22:26.787276   12698 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:22:26.935076   12698 main.go:141] libmachine: Running pre-create checks...
	I0729 10:22:26.935101   12698 main.go:141] libmachine: (addons-342031) Calling .PreCreateCheck
	I0729 10:22:26.935556   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:26.935980   12698 main.go:141] libmachine: Creating machine...
	I0729 10:22:26.935997   12698 main.go:141] libmachine: (addons-342031) Calling .Create
	I0729 10:22:26.936139   12698 main.go:141] libmachine: (addons-342031) Creating KVM machine...
	I0729 10:22:26.937327   12698 main.go:141] libmachine: (addons-342031) DBG | found existing default KVM network
	I0729 10:22:26.938077   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:26.937934   12720 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0729 10:22:26.938108   12698 main.go:141] libmachine: (addons-342031) DBG | created network xml: 
	I0729 10:22:26.938122   12698 main.go:141] libmachine: (addons-342031) DBG | <network>
	I0729 10:22:26.938131   12698 main.go:141] libmachine: (addons-342031) DBG |   <name>mk-addons-342031</name>
	I0729 10:22:26.938136   12698 main.go:141] libmachine: (addons-342031) DBG |   <dns enable='no'/>
	I0729 10:22:26.938186   12698 main.go:141] libmachine: (addons-342031) DBG |   
	I0729 10:22:26.938220   12698 main.go:141] libmachine: (addons-342031) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 10:22:26.938264   12698 main.go:141] libmachine: (addons-342031) DBG |     <dhcp>
	I0729 10:22:26.938288   12698 main.go:141] libmachine: (addons-342031) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 10:22:26.938295   12698 main.go:141] libmachine: (addons-342031) DBG |     </dhcp>
	I0729 10:22:26.938301   12698 main.go:141] libmachine: (addons-342031) DBG |   </ip>
	I0729 10:22:26.938309   12698 main.go:141] libmachine: (addons-342031) DBG |   
	I0729 10:22:26.938319   12698 main.go:141] libmachine: (addons-342031) DBG | </network>
	I0729 10:22:26.938332   12698 main.go:141] libmachine: (addons-342031) DBG | 
	I0729 10:22:26.943466   12698 main.go:141] libmachine: (addons-342031) DBG | trying to create private KVM network mk-addons-342031 192.168.39.0/24...
	I0729 10:22:27.007201   12698 main.go:141] libmachine: (addons-342031) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 ...
	I0729 10:22:27.007230   12698 main.go:141] libmachine: (addons-342031) DBG | private KVM network mk-addons-342031 192.168.39.0/24 created
	I0729 10:22:27.007250   12698 main.go:141] libmachine: (addons-342031) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:22:27.007265   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.007149   12720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:27.007312   12698 main.go:141] libmachine: (addons-342031) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:22:27.271201   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.271085   12720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa...
	I0729 10:22:27.433086   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.432945   12720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/addons-342031.rawdisk...
	I0729 10:22:27.433114   12698 main.go:141] libmachine: (addons-342031) DBG | Writing magic tar header
	I0729 10:22:27.433128   12698 main.go:141] libmachine: (addons-342031) DBG | Writing SSH key tar header
	I0729 10:22:27.433140   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:27.433073   12720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 ...
	I0729 10:22:27.433222   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031
	I0729 10:22:27.433245   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:22:27.433254   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031 (perms=drwx------)
	I0729 10:22:27.433261   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:22:27.433269   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:22:27.433284   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:22:27.433294   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:22:27.433305   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:22:27.433314   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:22:27.433321   12698 main.go:141] libmachine: (addons-342031) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:22:27.433329   12698 main.go:141] libmachine: (addons-342031) Creating domain...
	I0729 10:22:27.433340   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:22:27.433347   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:22:27.433353   12698 main.go:141] libmachine: (addons-342031) DBG | Checking permissions on dir: /home
	I0729 10:22:27.433360   12698 main.go:141] libmachine: (addons-342031) DBG | Skipping /home - not owner
	I0729 10:22:27.434323   12698 main.go:141] libmachine: (addons-342031) define libvirt domain using xml: 
	I0729 10:22:27.434352   12698 main.go:141] libmachine: (addons-342031) <domain type='kvm'>
	I0729 10:22:27.434362   12698 main.go:141] libmachine: (addons-342031)   <name>addons-342031</name>
	I0729 10:22:27.434377   12698 main.go:141] libmachine: (addons-342031)   <memory unit='MiB'>4000</memory>
	I0729 10:22:27.434385   12698 main.go:141] libmachine: (addons-342031)   <vcpu>2</vcpu>
	I0729 10:22:27.434390   12698 main.go:141] libmachine: (addons-342031)   <features>
	I0729 10:22:27.434397   12698 main.go:141] libmachine: (addons-342031)     <acpi/>
	I0729 10:22:27.434401   12698 main.go:141] libmachine: (addons-342031)     <apic/>
	I0729 10:22:27.434405   12698 main.go:141] libmachine: (addons-342031)     <pae/>
	I0729 10:22:27.434410   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434417   12698 main.go:141] libmachine: (addons-342031)   </features>
	I0729 10:22:27.434421   12698 main.go:141] libmachine: (addons-342031)   <cpu mode='host-passthrough'>
	I0729 10:22:27.434427   12698 main.go:141] libmachine: (addons-342031)   
	I0729 10:22:27.434438   12698 main.go:141] libmachine: (addons-342031)   </cpu>
	I0729 10:22:27.434444   12698 main.go:141] libmachine: (addons-342031)   <os>
	I0729 10:22:27.434448   12698 main.go:141] libmachine: (addons-342031)     <type>hvm</type>
	I0729 10:22:27.434479   12698 main.go:141] libmachine: (addons-342031)     <boot dev='cdrom'/>
	I0729 10:22:27.434503   12698 main.go:141] libmachine: (addons-342031)     <boot dev='hd'/>
	I0729 10:22:27.434515   12698 main.go:141] libmachine: (addons-342031)     <bootmenu enable='no'/>
	I0729 10:22:27.434530   12698 main.go:141] libmachine: (addons-342031)   </os>
	I0729 10:22:27.434542   12698 main.go:141] libmachine: (addons-342031)   <devices>
	I0729 10:22:27.434555   12698 main.go:141] libmachine: (addons-342031)     <disk type='file' device='cdrom'>
	I0729 10:22:27.434586   12698 main.go:141] libmachine: (addons-342031)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/boot2docker.iso'/>
	I0729 10:22:27.434599   12698 main.go:141] libmachine: (addons-342031)       <target dev='hdc' bus='scsi'/>
	I0729 10:22:27.434615   12698 main.go:141] libmachine: (addons-342031)       <readonly/>
	I0729 10:22:27.434632   12698 main.go:141] libmachine: (addons-342031)     </disk>
	I0729 10:22:27.434650   12698 main.go:141] libmachine: (addons-342031)     <disk type='file' device='disk'>
	I0729 10:22:27.434665   12698 main.go:141] libmachine: (addons-342031)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:22:27.434676   12698 main.go:141] libmachine: (addons-342031)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/addons-342031.rawdisk'/>
	I0729 10:22:27.434682   12698 main.go:141] libmachine: (addons-342031)       <target dev='hda' bus='virtio'/>
	I0729 10:22:27.434687   12698 main.go:141] libmachine: (addons-342031)     </disk>
	I0729 10:22:27.434692   12698 main.go:141] libmachine: (addons-342031)     <interface type='network'>
	I0729 10:22:27.434721   12698 main.go:141] libmachine: (addons-342031)       <source network='mk-addons-342031'/>
	I0729 10:22:27.434737   12698 main.go:141] libmachine: (addons-342031)       <model type='virtio'/>
	I0729 10:22:27.434748   12698 main.go:141] libmachine: (addons-342031)     </interface>
	I0729 10:22:27.434758   12698 main.go:141] libmachine: (addons-342031)     <interface type='network'>
	I0729 10:22:27.434770   12698 main.go:141] libmachine: (addons-342031)       <source network='default'/>
	I0729 10:22:27.434775   12698 main.go:141] libmachine: (addons-342031)       <model type='virtio'/>
	I0729 10:22:27.434782   12698 main.go:141] libmachine: (addons-342031)     </interface>
	I0729 10:22:27.434791   12698 main.go:141] libmachine: (addons-342031)     <serial type='pty'>
	I0729 10:22:27.434803   12698 main.go:141] libmachine: (addons-342031)       <target port='0'/>
	I0729 10:22:27.434817   12698 main.go:141] libmachine: (addons-342031)     </serial>
	I0729 10:22:27.434828   12698 main.go:141] libmachine: (addons-342031)     <console type='pty'>
	I0729 10:22:27.434842   12698 main.go:141] libmachine: (addons-342031)       <target type='serial' port='0'/>
	I0729 10:22:27.434858   12698 main.go:141] libmachine: (addons-342031)     </console>
	I0729 10:22:27.434866   12698 main.go:141] libmachine: (addons-342031)     <rng model='virtio'>
	I0729 10:22:27.434874   12698 main.go:141] libmachine: (addons-342031)       <backend model='random'>/dev/random</backend>
	I0729 10:22:27.434883   12698 main.go:141] libmachine: (addons-342031)     </rng>
	I0729 10:22:27.434900   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434916   12698 main.go:141] libmachine: (addons-342031)     
	I0729 10:22:27.434928   12698 main.go:141] libmachine: (addons-342031)   </devices>
	I0729 10:22:27.434939   12698 main.go:141] libmachine: (addons-342031) </domain>
	I0729 10:22:27.434953   12698 main.go:141] libmachine: (addons-342031) 
	I0729 10:22:27.440547   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:ff:f9:9d in network default
	I0729 10:22:27.441089   12698 main.go:141] libmachine: (addons-342031) Ensuring networks are active...
	I0729 10:22:27.441108   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:27.441734   12698 main.go:141] libmachine: (addons-342031) Ensuring network default is active
	I0729 10:22:27.442102   12698 main.go:141] libmachine: (addons-342031) Ensuring network mk-addons-342031 is active
	I0729 10:22:27.442532   12698 main.go:141] libmachine: (addons-342031) Getting domain xml...
	I0729 10:22:27.443154   12698 main.go:141] libmachine: (addons-342031) Creating domain...
	I0729 10:22:28.821137   12698 main.go:141] libmachine: (addons-342031) Waiting to get IP...
	I0729 10:22:28.822041   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:28.822408   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:28.822503   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:28.822437   12720 retry.go:31] will retry after 195.541091ms: waiting for machine to come up
	I0729 10:22:29.019770   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.020248   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.020277   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.020177   12720 retry.go:31] will retry after 309.221715ms: waiting for machine to come up
	I0729 10:22:29.330544   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.330982   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.331003   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.330938   12720 retry.go:31] will retry after 355.964011ms: waiting for machine to come up
	I0729 10:22:29.688385   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:29.688926   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:29.688954   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:29.688887   12720 retry.go:31] will retry after 484.927173ms: waiting for machine to come up
	I0729 10:22:30.175884   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:30.176403   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:30.176442   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:30.176354   12720 retry.go:31] will retry after 689.808028ms: waiting for machine to come up
	I0729 10:22:30.868197   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:30.868660   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:30.868685   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:30.868578   12720 retry.go:31] will retry after 916.035718ms: waiting for machine to come up
	I0729 10:22:31.786379   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:31.786834   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:31.786865   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:31.786752   12720 retry.go:31] will retry after 751.473166ms: waiting for machine to come up
	I0729 10:22:32.539734   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:32.540095   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:32.540116   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:32.540058   12720 retry.go:31] will retry after 988.862367ms: waiting for machine to come up
	I0729 10:22:33.530089   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:33.530398   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:33.530426   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:33.530345   12720 retry.go:31] will retry after 1.4355459s: waiting for machine to come up
	I0729 10:22:34.967825   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:34.968197   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:34.968221   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:34.968154   12720 retry.go:31] will retry after 1.673804403s: waiting for machine to come up
	I0729 10:22:36.643776   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:36.644310   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:36.644334   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:36.644248   12720 retry.go:31] will retry after 2.552383352s: waiting for machine to come up
	I0729 10:22:39.199894   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:39.200354   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:39.200383   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:39.200305   12720 retry.go:31] will retry after 2.297424729s: waiting for machine to come up
	I0729 10:22:41.500667   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:41.501034   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:41.501053   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:41.500991   12720 retry.go:31] will retry after 3.517350765s: waiting for machine to come up
	I0729 10:22:45.022370   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:45.022689   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find current IP address of domain addons-342031 in network mk-addons-342031
	I0729 10:22:45.022733   12698 main.go:141] libmachine: (addons-342031) DBG | I0729 10:22:45.022649   12720 retry.go:31] will retry after 4.782196854s: waiting for machine to come up
	I0729 10:22:49.807334   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.807781   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has current primary IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.807799   12698 main.go:141] libmachine: (addons-342031) Found IP for machine: 192.168.39.224
	I0729 10:22:49.807812   12698 main.go:141] libmachine: (addons-342031) Reserving static IP address...
	I0729 10:22:49.808145   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find host DHCP lease matching {name: "addons-342031", mac: "52:54:00:26:46:e4", ip: "192.168.39.224"} in network mk-addons-342031
	I0729 10:22:49.878329   12698 main.go:141] libmachine: (addons-342031) DBG | Getting to WaitForSSH function...
	I0729 10:22:49.878359   12698 main.go:141] libmachine: (addons-342031) Reserved static IP address: 192.168.39.224
	I0729 10:22:49.878373   12698 main.go:141] libmachine: (addons-342031) Waiting for SSH to be available...
	I0729 10:22:49.880810   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:49.881081   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031
	I0729 10:22:49.881118   12698 main.go:141] libmachine: (addons-342031) DBG | unable to find defined IP address of network mk-addons-342031 interface with MAC address 52:54:00:26:46:e4
	I0729 10:22:49.881269   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH client type: external
	I0729 10:22:49.881311   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa (-rw-------)
	I0729 10:22:49.881350   12698 main.go:141] libmachine: (addons-342031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:22:49.881364   12698 main.go:141] libmachine: (addons-342031) DBG | About to run SSH command:
	I0729 10:22:49.881380   12698 main.go:141] libmachine: (addons-342031) DBG | exit 0
	I0729 10:22:49.892439   12698 main.go:141] libmachine: (addons-342031) DBG | SSH cmd err, output: exit status 255: 
	I0729 10:22:49.892464   12698 main.go:141] libmachine: (addons-342031) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 10:22:49.892476   12698 main.go:141] libmachine: (addons-342031) DBG | command : exit 0
	I0729 10:22:49.892484   12698 main.go:141] libmachine: (addons-342031) DBG | err     : exit status 255
	I0729 10:22:49.892495   12698 main.go:141] libmachine: (addons-342031) DBG | output  : 
	I0729 10:22:52.892685   12698 main.go:141] libmachine: (addons-342031) DBG | Getting to WaitForSSH function...
	I0729 10:22:52.895786   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:52.896257   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:52.896286   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:52.896350   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH client type: external
	I0729 10:22:52.896369   12698 main.go:141] libmachine: (addons-342031) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa (-rw-------)
	I0729 10:22:52.896470   12698 main.go:141] libmachine: (addons-342031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:22:52.896491   12698 main.go:141] libmachine: (addons-342031) DBG | About to run SSH command:
	I0729 10:22:52.896501   12698 main.go:141] libmachine: (addons-342031) DBG | exit 0
	I0729 10:22:53.018886   12698 main.go:141] libmachine: (addons-342031) DBG | SSH cmd err, output: <nil>: 
	I0729 10:22:53.019180   12698 main.go:141] libmachine: (addons-342031) KVM machine creation complete!
	I0729 10:22:53.019451   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:53.019957   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:53.020152   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:53.020314   12698 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:22:53.020328   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:22:53.021481   12698 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:22:53.021498   12698 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:22:53.021506   12698 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:22:53.021512   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.023922   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.024313   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.024340   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.024467   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.024675   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.024862   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.024989   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.025141   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.025321   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.025342   12698 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:22:53.122113   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:22:53.122139   12698 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:22:53.122147   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.125038   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.125408   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.125443   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.125641   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.125831   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.125981   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.126106   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.126241   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.126388   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.126397   12698 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:22:53.223389   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:22:53.223476   12698 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:22:53.223490   12698 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:22:53.223503   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.223736   12698 buildroot.go:166] provisioning hostname "addons-342031"
	I0729 10:22:53.223759   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.223929   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.226140   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.226436   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.226462   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.226601   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.226780   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.226910   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.227023   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.227197   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.227361   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.227374   12698 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-342031 && echo "addons-342031" | sudo tee /etc/hostname
	I0729 10:22:53.341863   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-342031
	
	I0729 10:22:53.341887   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.344719   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.345165   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.345189   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.345404   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.345584   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.345742   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.345886   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.346079   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:53.346233   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:53.346249   12698 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-342031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-342031/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-342031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:22:53.452067   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:22:53.452105   12698 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:22:53.452157   12698 buildroot.go:174] setting up certificates
	I0729 10:22:53.452171   12698 provision.go:84] configureAuth start
	I0729 10:22:53.452190   12698 main.go:141] libmachine: (addons-342031) Calling .GetMachineName
	I0729 10:22:53.452461   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:53.454977   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.455267   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.455293   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.455421   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.457342   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.457631   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.457656   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.457784   12698 provision.go:143] copyHostCerts
	I0729 10:22:53.457848   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:22:53.457998   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:22:53.458083   12698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:22:53.458145   12698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.addons-342031 san=[127.0.0.1 192.168.39.224 addons-342031 localhost minikube]
	I0729 10:22:53.970609   12698 provision.go:177] copyRemoteCerts
	I0729 10:22:53.970664   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:22:53.970686   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:53.973214   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.973517   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:53.973546   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:53.973663   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:53.973843   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:53.974015   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:53.974148   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.052636   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:22:54.076701   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:22:54.099658   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:22:54.123583   12698 provision.go:87] duration metric: took 671.393634ms to configureAuth
	I0729 10:22:54.123608   12698 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:22:54.123767   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:22:54.123841   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.126445   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.126735   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.126768   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.126938   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.127156   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.127357   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.127488   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.127623   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:54.127789   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:54.127804   12698 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:22:54.385444   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:22:54.385474   12698 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:22:54.385506   12698 main.go:141] libmachine: (addons-342031) Calling .GetURL
	I0729 10:22:54.386942   12698 main.go:141] libmachine: (addons-342031) DBG | Using libvirt version 6000000
	I0729 10:22:54.389227   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.389581   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.389612   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.389711   12698 main.go:141] libmachine: Docker is up and running!
	I0729 10:22:54.389725   12698 main.go:141] libmachine: Reticulating splines...
	I0729 10:22:54.389733   12698 client.go:171] duration metric: took 27.797830436s to LocalClient.Create
	I0729 10:22:54.389758   12698 start.go:167] duration metric: took 27.797890326s to libmachine.API.Create "addons-342031"
	I0729 10:22:54.389771   12698 start.go:293] postStartSetup for "addons-342031" (driver="kvm2")
	I0729 10:22:54.389784   12698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:22:54.389799   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.390023   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:22:54.390044   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.392254   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.392587   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.392607   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.392800   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.392956   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.393122   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.393232   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.473735   12698 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:22:54.477956   12698 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:22:54.477982   12698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:22:54.478071   12698 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:22:54.478102   12698 start.go:296] duration metric: took 88.323931ms for postStartSetup
	I0729 10:22:54.478160   12698 main.go:141] libmachine: (addons-342031) Calling .GetConfigRaw
	I0729 10:22:54.478694   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:54.481118   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.481465   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.481488   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.481749   12698 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/config.json ...
	I0729 10:22:54.481929   12698 start.go:128] duration metric: took 27.907873744s to createHost
	I0729 10:22:54.481958   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.484131   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.484454   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.484474   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.484596   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.484860   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.485017   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.485155   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.485332   12698 main.go:141] libmachine: Using SSH client type: native
	I0729 10:22:54.485490   12698 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0729 10:22:54.485502   12698 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:22:54.583486   12698 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722248574.564028643
	
	I0729 10:22:54.583508   12698 fix.go:216] guest clock: 1722248574.564028643
	I0729 10:22:54.583514   12698 fix.go:229] Guest: 2024-07-29 10:22:54.564028643 +0000 UTC Remote: 2024-07-29 10:22:54.481940225 +0000 UTC m=+28.006298665 (delta=82.088418ms)
	I0729 10:22:54.583556   12698 fix.go:200] guest clock delta is within tolerance: 82.088418ms
	I0729 10:22:54.583567   12698 start.go:83] releasing machines lock for "addons-342031", held for 28.009600176s
	I0729 10:22:54.583591   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.583838   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:54.586516   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.586951   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.586976   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.587111   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587559   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587734   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:22:54.587811   12698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:22:54.587853   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.587923   12698 ssh_runner.go:195] Run: cat /version.json
	I0729 10:22:54.587946   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:22:54.590516   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.590664   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.590985   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.591018   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:54.591037   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.591083   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:54.591150   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.591302   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:22:54.591363   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.591464   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:22:54.591577   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.591674   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:22:54.591740   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.591777   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:22:54.663824   12698 ssh_runner.go:195] Run: systemctl --version
	I0729 10:22:54.691554   12698 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:22:54.849005   12698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:22:54.855374   12698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:22:54.855464   12698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:22:54.871260   12698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:22:54.871285   12698 start.go:495] detecting cgroup driver to use...
	I0729 10:22:54.871351   12698 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:22:54.886915   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:22:54.900702   12698 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:22:54.900751   12698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:22:54.913925   12698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:22:54.926818   12698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:22:55.037036   12698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:22:55.181561   12698 docker.go:233] disabling docker service ...
	I0729 10:22:55.181621   12698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:22:55.202224   12698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:22:55.215904   12698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:22:55.363613   12698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:22:55.484120   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:22:55.499134   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:22:55.518491   12698 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:22:55.518539   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.529356   12698 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:22:55.529436   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.540203   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.550853   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.561568   12698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:22:55.572562   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.583485   12698 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.601219   12698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:22:55.612562   12698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:22:55.622499   12698 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:22:55.622614   12698 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:22:55.635857   12698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:22:55.646240   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:22:55.763578   12698 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:22:55.908295   12698 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:22:55.908386   12698 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:22:55.912856   12698 start.go:563] Will wait 60s for crictl version
	I0729 10:22:55.912912   12698 ssh_runner.go:195] Run: which crictl
	I0729 10:22:55.916591   12698 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:22:55.956978   12698 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:22:55.957108   12698 ssh_runner.go:195] Run: crio --version
	I0729 10:22:55.985476   12698 ssh_runner.go:195] Run: crio --version
	I0729 10:22:56.017764   12698 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:22:56.019180   12698 main.go:141] libmachine: (addons-342031) Calling .GetIP
	I0729 10:22:56.021767   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:56.022099   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:22:56.022117   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:22:56.022337   12698 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:22:56.026399   12698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:22:56.038809   12698 kubeadm.go:883] updating cluster {Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:22:56.038918   12698 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:22:56.038958   12698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:22:56.070365   12698 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 10:22:56.070436   12698 ssh_runner.go:195] Run: which lz4
	I0729 10:22:56.074372   12698 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 10:22:56.078477   12698 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:22:56.078508   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 10:22:57.482788   12698 crio.go:462] duration metric: took 1.408437287s to copy over tarball
	I0729 10:22:57.482866   12698 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:22:59.818324   12698 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335433207s)
	I0729 10:22:59.818354   12698 crio.go:469] duration metric: took 2.335534339s to extract the tarball
	I0729 10:22:59.818369   12698 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:22:59.856818   12698 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:22:59.899098   12698 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:22:59.899133   12698 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:22:59.899143   12698 kubeadm.go:934] updating node { 192.168.39.224 8443 v1.30.3 crio true true} ...
	I0729 10:22:59.899262   12698 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-342031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:22:59.899330   12698 ssh_runner.go:195] Run: crio config
	I0729 10:22:59.945082   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:22:59.945105   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:22:59.945118   12698 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:22:59.945147   12698 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-342031 NodeName:addons-342031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:22:59.945301   12698 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-342031"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:22:59.945359   12698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:22:59.955352   12698 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:22:59.955430   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:22:59.964931   12698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 10:22:59.981626   12698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:22:59.998252   12698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 10:23:00.016127   12698 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0729 10:23:00.020257   12698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:23:00.033296   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:23:00.144045   12698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:23:00.161025   12698 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031 for IP: 192.168.39.224
	I0729 10:23:00.161050   12698 certs.go:194] generating shared ca certs ...
	I0729 10:23:00.161069   12698 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.161227   12698 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:23:00.551886   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt ...
	I0729 10:23:00.551921   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt: {Name:mka8cf7129dad81b43b458c80907bb582a244c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.552123   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key ...
	I0729 10:23:00.552140   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key: {Name:mk0d4a0975e994627d0a57853c3533e5941aaaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.552251   12698 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:23:00.675483   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt ...
	I0729 10:23:00.675516   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt: {Name:mk97ca6b11acfe37f69f07b0ad2f80f38e3821b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.675706   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key ...
	I0729 10:23:00.675721   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key: {Name:mkc24eedf8d015704d0c2fb9cb7ecfdd6327465e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.675829   12698 certs.go:256] generating profile certs ...
	I0729 10:23:00.675909   12698 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key
	I0729 10:23:00.675932   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt with IP's: []
	I0729 10:23:00.855889   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt ...
	I0729 10:23:00.855920   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: {Name:mk37221677c567e713e2630239b01169668a5d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.856117   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key ...
	I0729 10:23:00.856133   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.key: {Name:mkb51aeddb9569ca59fd2c15435e5e96e355f414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.856243   12698 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3
	I0729 10:23:00.856265   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.224]
	I0729 10:23:00.989586   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 ...
	I0729 10:23:00.989617   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3: {Name:mke401fbeba18fa1c710817d8169aadd5ba6547c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.989775   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3 ...
	I0729 10:23:00.989792   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3: {Name:mk0e7aed75425bcb0779fff6de3d79143d9c1b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:00.989868   12698 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt.0accf3d3 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt
	I0729 10:23:00.989947   12698 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key.0accf3d3 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key
	I0729 10:23:00.989998   12698 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key
	I0729 10:23:00.990018   12698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt with IP's: []
	I0729 10:23:01.314898   12698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt ...
	I0729 10:23:01.314934   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt: {Name:mk37096704201e38d1ef496c8563f06c21b8bd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:01.315093   12698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key ...
	I0729 10:23:01.315103   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key: {Name:mk4912fd096558512f1a3b241f31bad5af303652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:01.315258   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:23:01.315290   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:23:01.315314   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:23:01.315337   12698 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:23:01.315934   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:23:01.344813   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:23:01.370004   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:23:01.395105   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:23:01.421073   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 10:23:01.445856   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:23:01.470509   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:23:01.495201   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:23:01.525292   12698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:23:01.549446   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:23:01.567088   12698 ssh_runner.go:195] Run: openssl version
	I0729 10:23:01.572909   12698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:23:01.584523   12698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.589210   12698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.589277   12698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:23:01.595205   12698 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:23:01.606920   12698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:23:01.610964   12698 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:23:01.611009   12698 kubeadm.go:392] StartCluster: {Name:addons-342031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-342031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:01.611076   12698 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:23:01.611114   12698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:23:01.656041   12698 cri.go:89] found id: ""
	I0729 10:23:01.656108   12698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:23:01.666845   12698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:23:01.677056   12698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:23:01.687121   12698 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:23:01.687144   12698 kubeadm.go:157] found existing configuration files:
	
	I0729 10:23:01.687197   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:23:01.696586   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:23:01.696642   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:23:01.706741   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:23:01.716220   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:23:01.716272   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:23:01.726527   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:23:01.736199   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:23:01.736253   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:23:01.749229   12698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:23:01.759335   12698 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:23:01.759397   12698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:23:01.776945   12698 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:23:01.967268   12698 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:23:12.140583   12698 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:23:12.140651   12698 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:23:12.140737   12698 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:23:12.140851   12698 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:23:12.140931   12698 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:23:12.140982   12698 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:23:12.143333   12698 out.go:204]   - Generating certificates and keys ...
	I0729 10:23:12.143406   12698 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:23:12.143453   12698 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:23:12.143530   12698 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:23:12.143592   12698 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:23:12.143669   12698 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:23:12.143722   12698 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:23:12.143769   12698 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:23:12.143889   12698 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-342031 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0729 10:23:12.143953   12698 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:23:12.144052   12698 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-342031 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I0729 10:23:12.144111   12698 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:23:12.144168   12698 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:23:12.144243   12698 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:23:12.144295   12698 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:23:12.144338   12698 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:23:12.144391   12698 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:23:12.144463   12698 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:23:12.144540   12698 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:23:12.144625   12698 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:23:12.144753   12698 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:23:12.144839   12698 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:23:12.146425   12698 out.go:204]   - Booting up control plane ...
	I0729 10:23:12.146500   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:23:12.146575   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:23:12.146663   12698 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:23:12.146807   12698 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:23:12.146893   12698 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:23:12.146953   12698 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:23:12.147111   12698 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:23:12.147186   12698 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:23:12.147268   12698 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.684726ms
	I0729 10:23:12.147373   12698 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:23:12.147435   12698 kubeadm.go:310] [api-check] The API server is healthy after 5.502151656s
	I0729 10:23:12.147574   12698 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:23:12.147690   12698 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:23:12.147735   12698 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:23:12.147879   12698 kubeadm.go:310] [mark-control-plane] Marking the node addons-342031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:23:12.147926   12698 kubeadm.go:310] [bootstrap-token] Using token: smwj70.27f0grtxfr80dwmz
	I0729 10:23:12.149231   12698 out.go:204]   - Configuring RBAC rules ...
	I0729 10:23:12.149321   12698 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:23:12.149402   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:23:12.149524   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:23:12.149631   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:23:12.149726   12698 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:23:12.149800   12698 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:23:12.149896   12698 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:23:12.149936   12698 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:23:12.149975   12698 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:23:12.149981   12698 kubeadm.go:310] 
	I0729 10:23:12.150029   12698 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:23:12.150035   12698 kubeadm.go:310] 
	I0729 10:23:12.150100   12698 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:23:12.150105   12698 kubeadm.go:310] 
	I0729 10:23:12.150134   12698 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:23:12.150213   12698 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:23:12.150286   12698 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:23:12.150296   12698 kubeadm.go:310] 
	I0729 10:23:12.150377   12698 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:23:12.150388   12698 kubeadm.go:310] 
	I0729 10:23:12.150456   12698 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:23:12.150465   12698 kubeadm.go:310] 
	I0729 10:23:12.150535   12698 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:23:12.150645   12698 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:23:12.150770   12698 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:23:12.150780   12698 kubeadm.go:310] 
	I0729 10:23:12.150907   12698 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:23:12.151005   12698 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:23:12.151012   12698 kubeadm.go:310] 
	I0729 10:23:12.151249   12698 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token smwj70.27f0grtxfr80dwmz \
	I0729 10:23:12.151369   12698 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 10:23:12.151397   12698 kubeadm.go:310] 	--control-plane 
	I0729 10:23:12.151404   12698 kubeadm.go:310] 
	I0729 10:23:12.151470   12698 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:23:12.151478   12698 kubeadm.go:310] 
	I0729 10:23:12.151556   12698 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token smwj70.27f0grtxfr80dwmz \
	I0729 10:23:12.151723   12698 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 10:23:12.151741   12698 cni.go:84] Creating CNI manager for ""
	I0729 10:23:12.151751   12698 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:23:12.154038   12698 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:23:12.155580   12698 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:23:12.167254   12698 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 10:23:12.186947   12698 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:23:12.186991   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:12.187040   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-342031 minikube.k8s.io/updated_at=2024_07_29T10_23_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=addons-342031 minikube.k8s.io/primary=true
	I0729 10:23:12.314257   12698 ops.go:34] apiserver oom_adj: -16
	I0729 10:23:12.327612   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:12.828102   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:13.327797   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:13.828497   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:14.327873   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:14.827777   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:15.327749   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:15.828518   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:16.327730   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:16.827722   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:17.328319   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:17.827752   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:18.327751   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:18.827714   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:19.328338   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:19.828572   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:20.328400   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:20.828491   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:21.327620   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:21.828511   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:22.328498   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:22.827985   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:23.327693   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:23.827857   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.327976   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.828269   12698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:23:24.947888   12698 kubeadm.go:1113] duration metric: took 12.760943739s to wait for elevateKubeSystemPrivileges
	I0729 10:23:24.947913   12698 kubeadm.go:394] duration metric: took 23.336907921s to StartCluster
	I0729 10:23:24.947928   12698 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:24.948028   12698 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:23:24.948409   12698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:24.948578   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:23:24.948600   12698 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:23:24.948667   12698 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 10:23:24.948752   12698 addons.go:69] Setting yakd=true in profile "addons-342031"
	I0729 10:23:24.948765   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:23:24.948778   12698 addons.go:69] Setting inspektor-gadget=true in profile "addons-342031"
	I0729 10:23:24.948794   12698 addons.go:234] Setting addon yakd=true in "addons-342031"
	I0729 10:23:24.948808   12698 addons.go:69] Setting volcano=true in profile "addons-342031"
	I0729 10:23:24.948816   12698 addons.go:234] Setting addon inspektor-gadget=true in "addons-342031"
	I0729 10:23:24.948831   12698 addons.go:234] Setting addon volcano=true in "addons-342031"
	I0729 10:23:24.948853   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948855   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948858   12698 addons.go:69] Setting cloud-spanner=true in profile "addons-342031"
	I0729 10:23:24.948862   12698 addons.go:69] Setting metrics-server=true in profile "addons-342031"
	I0729 10:23:24.948875   12698 addons.go:234] Setting addon cloud-spanner=true in "addons-342031"
	I0729 10:23:24.948883   12698 addons.go:234] Setting addon metrics-server=true in "addons-342031"
	I0729 10:23:24.948896   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.948816   12698 addons.go:69] Setting storage-provisioner=true in profile "addons-342031"
	I0729 10:23:24.948918   12698 addons.go:69] Setting gcp-auth=true in profile "addons-342031"
	I0729 10:23:24.948933   12698 mustload.go:65] Loading cluster: addons-342031
	I0729 10:23:24.948941   12698 addons.go:234] Setting addon storage-provisioner=true in "addons-342031"
	I0729 10:23:24.948974   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949104   12698 config.go:182] Loaded profile config "addons-342031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:23:24.949280   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949283   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949301   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949304   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949316   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949394   12698 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-342031"
	I0729 10:23:24.949408   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949419   12698 addons.go:69] Setting default-storageclass=true in profile "addons-342031"
	I0729 10:23:24.949438   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949447   12698 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-342031"
	I0729 10:23:24.949456   12698 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-342031"
	I0729 10:23:24.948853   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949485   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949542   12698 addons.go:69] Setting ingress=true in profile "addons-342031"
	I0729 10:23:24.949566   12698 addons.go:234] Setting addon ingress=true in "addons-342031"
	I0729 10:23:24.948839   12698 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-342031"
	I0729 10:23:24.949629   12698 addons.go:69] Setting helm-tiller=true in profile "addons-342031"
	I0729 10:23:24.949647   12698 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-342031"
	I0729 10:23:24.949655   12698 addons.go:234] Setting addon helm-tiller=true in "addons-342031"
	I0729 10:23:24.949675   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949624   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.949762   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949779   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949794   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949301   12698 addons.go:69] Setting volumesnapshots=true in profile "addons-342031"
	I0729 10:23:24.949833   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.949844   12698 addons.go:234] Setting addon volumesnapshots=true in "addons-342031"
	I0729 10:23:24.949866   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950041   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950069   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950179   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950206   12698 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-342031"
	I0729 10:23:24.949675   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950228   12698 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-342031"
	I0729 10:23:24.948909   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.950241   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950262   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950558   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.949412   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950573   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950585   12698 addons.go:69] Setting registry=true in profile "addons-342031"
	I0729 10:23:24.950591   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950605   12698 addons.go:234] Setting addon registry=true in "addons-342031"
	I0729 10:23:24.950610   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950632   12698 addons.go:69] Setting ingress-dns=true in profile "addons-342031"
	I0729 10:23:24.950651   12698 addons.go:234] Setting addon ingress-dns=true in "addons-342031"
	I0729 10:23:24.950658   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950670   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950558   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.950769   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950810   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950962   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.951507   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.951574   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.961635   12698 out.go:177] * Verifying Kubernetes components...
	I0729 10:23:24.950230   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.950593   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.962397   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:24.962784   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.962811   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.963522   12698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:23:24.970252   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0729 10:23:24.972091   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0729 10:23:24.972406   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.972714   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.973141   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.973158   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.973427   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.973444   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.973598   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.974268   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.974600   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.974626   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.974852   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.974900   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.979595   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0729 10:23:24.980093   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.980637   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.980660   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.981028   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.981188   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:24.982833   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0729 10:23:24.983150   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.983667   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.983682   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.984002   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.985017   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.985042   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.991170   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0729 10:23:24.992006   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 10:23:24.992240   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.992505   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:24.992983   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.993005   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.993346   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.993743   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:24.993766   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:24.994311   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.994367   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:24.994850   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:24.995381   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:24.995413   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.002250   12698 addons.go:234] Setting addon default-storageclass=true in "addons-342031"
	I0729 10:23:25.002293   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.002642   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.002668   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.005565   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0729 10:23:25.007184   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.007941   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.007967   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.008445   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.008696   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.011193   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.013230   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0729 10:23:25.013767   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.013838   12698 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:23:25.014097   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0729 10:23:25.014465   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.014717   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.014732   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.015310   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.015327   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.015590   12698 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:23:25.015605   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:23:25.015622   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.015971   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.016149   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.016399   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0729 10:23:25.016726   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.017596   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0729 10:23:25.017815   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.017835   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.017900   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0729 10:23:25.018243   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.018326   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.018525   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.018580   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.019054   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.019073   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.019496   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.019640   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.019654   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.020600   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.020625   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.020836   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.020899   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.021247   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.021276   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.021473   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.021493   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.021510   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.021856   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0729 10:23:25.021991   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.022042   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.022256   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.022309   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.022503   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.022556   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.022584   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.022994   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.023304   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.023351   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.023656   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.024024   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.024040   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.024365   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.024834   12698 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 10:23:25.024905   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.024932   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.025153   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0729 10:23:25.025300   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
	I0729 10:23:25.025729   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.026043   12698 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 10:23:25.026060   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 10:23:25.026080   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.026878   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.026900   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.027360   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.027850   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.027890   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.030825   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.031017   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0729 10:23:25.031245   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.031348   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.031371   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.031528   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.031744   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.031938   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.032106   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.032475   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.032491   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.032761   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.033346   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.033365   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.033749   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.033822   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.034395   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.034431   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.036243   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0729 10:23:25.036592   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.037128   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.037149   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.038036   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.038304   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.039760   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.039800   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.042634   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0729 10:23:25.043273   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.043797   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.043826   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.044138   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.044341   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.046646   12698 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-342031"
	I0729 10:23:25.046688   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:25.047047   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.047081   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.047295   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0729 10:23:25.047833   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.048354   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.048369   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.048727   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.049263   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.049298   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.050334   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.050635   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:25.050650   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:25.050999   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:25.051015   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:25.051026   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:25.051038   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:25.051045   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:25.051276   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:25.051287   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:23:25.051368   12698 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 10:23:25.053978   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0729 10:23:25.054406   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.054894   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.054918   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.055950   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.056454   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.056496   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.056680   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0729 10:23:25.057138   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.057697   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.057721   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.058112   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.064090   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0729 10:23:25.064520   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.064592   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.065739   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.065758   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.065987   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0729 10:23:25.066144   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0729 10:23:25.066172   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.066455   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.066620   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.067165   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.067183   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.067593   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.067842   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.068064   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0729 10:23:25.068218   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.068955   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.069706   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.069724   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.070025   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.070251   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 10:23:25.070279   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.070349   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.070862   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.070951   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.072127   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.072144   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.072392   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 10:23:25.072449   12698 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 10:23:25.072456   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.072475   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 10:23:25.072593   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.073325   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0729 10:23:25.073738   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.074241   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.074253   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.074639   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.074805   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.074856   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.075784   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 10:23:25.075816   12698 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 10:23:25.075845   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 10:23:25.076328   12698 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 10:23:25.076349   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.077077   12698 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 10:23:25.077157   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.078115   12698 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 10:23:25.078266   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 10:23:25.078284   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.079124   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 10:23:25.079140   12698 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 10:23:25.079167   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.079673   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 10:23:25.079865   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 10:23:25.081107   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 10:23:25.081192   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:25.082402   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0729 10:23:25.082903   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.082950   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.083446   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.083468   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.084127   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:25.084185   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 10:23:25.084287   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.084425   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.084437   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.084514   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.084573   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0729 10:23:25.084963   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.085045   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.085343   12698 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:23:25.085360   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 10:23:25.085376   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.085477   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.085644   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:25.085678   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:25.085984   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.086397   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.086426   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.086821   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.087096   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.087567   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.087917   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 10:23:25.088206   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.088466   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33211
	I0729 10:23:25.088848   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.088989   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089429   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.089450   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089586   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.089716   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.089776   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.089840   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.089852   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.090171   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.090201   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.090231   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.090279   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.090744   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.090762   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.091105   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.091259   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.091444   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.091752   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.092153   12698 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 10:23:25.092362   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0729 10:23:25.092465   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0729 10:23:25.092576   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.092866   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.093215   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.093235   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.093471   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.093488   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.093505   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 10:23:25.093519   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 10:23:25.093534   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.093550   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.093579   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.093646   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.093804   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.093987   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.094024   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.094360   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.094382   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.094743   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.095352   12698 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 10:23:25.095587   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.095900   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.096668   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 10:23:25.096683   12698 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 10:23:25.096700   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.097276   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0729 10:23:25.097398   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.097574   12698 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 10:23:25.097702   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.097715   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.098245   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.098269   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.098624   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.098797   12698 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 10:23:25.098814   12698 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 10:23:25.098829   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.098839   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.099448   12698 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 10:23:25.099455   12698 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 10:23:25.100490   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.100959   12698 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:23:25.100973   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 10:23:25.100998   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.101096   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 10:23:25.101109   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 10:23:25.101126   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.102119   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.102144   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.102174   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.102416   12698 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:23:25.102428   12698 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:23:25.102451   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.102635   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.102808   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.102967   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.103118   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.103128   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.103679   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.103714   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.104028   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0729 10:23:25.104191   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.104332   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.104403   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.104514   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.104819   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.105160   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.105177   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.105657   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.106201   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.106365   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.107474   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.107517   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.107546   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.107703   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.107856   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.107969   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.107999   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0729 10:23:25.108023   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.108189   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.108404   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.108421   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.108456   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:25.108809   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.108931   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:25.108945   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:25.108984   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.109073   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.109100   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.109212   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.109448   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:25.109639   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:25.109667   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.109709   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.109723   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.109956   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.110115   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.110170   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.110194   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.110520   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.110745   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.110962   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.111131   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.111230   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:25.111306   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.111427   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:25.111867   12698 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 10:23:25.112661   12698 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0729 10:23:25.113248   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43098->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.113268   12698 retry.go:31] will retry after 258.973636ms: ssh: handshake failed: read tcp 192.168.39.1:43098->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.113457   12698 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:23:25.113471   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 10:23:25.113483   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.115120   12698 out.go:177]   - Using image docker.io/busybox:stable
	I0729 10:23:25.116226   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.116244   12698 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:23:25.116257   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 10:23:25.116276   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:25.116623   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.116647   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.116795   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.116948   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.117094   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.117209   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	W0729 10:23:25.117796   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43114->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.117822   12698 retry.go:31] will retry after 323.118222ms: ssh: handshake failed: read tcp 192.168.39.1:43114->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.119350   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.119688   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:25.119706   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:25.119855   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:25.120006   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:25.120182   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:25.120345   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	W0729 10:23:25.125242   12698 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43116->192.168.39.224:22: read: connection reset by peer
	I0729 10:23:25.125270   12698 retry.go:31] will retry after 291.440373ms: ssh: handshake failed: read tcp 192.168.39.1:43116->192.168.39.224:22: read: connection reset by peer
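The three handshake failures above are transient (sshd on the freshly booted node resets the first connections), so minikube just retries each dial after a short delay. A minimal bash sketch of the same retry idea, useful when scripting against a VM that has only just come up; the host IP matches this run, but the key path and delays are illustrative placeholders:

    # Probe ssh a few times with growing delays instead of failing on the first reset.
    host=192.168.39.224                                   # node IP from the log; adjust as needed
    key=$HOME/.minikube/machines/addons-342031/id_rsa     # illustrative key path
    for delay in 0.3 0.6 1.2 2.4; do
        ssh -i "$key" -o ConnectTimeout=5 -o StrictHostKeyChecking=no docker@"$host" true && break
        echo "ssh handshake failed, retrying in ${delay}s" >&2
        sleep "$delay"
    done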
	I0729 10:23:25.410301   12698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:23:25.410376   12698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:23:25.416576   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:23:25.464840   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 10:23:25.467148   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 10:23:25.467167   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 10:23:25.480934   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:23:25.496094   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:23:25.516660   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 10:23:25.516687   12698 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 10:23:25.550490   12698 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 10:23:25.550518   12698 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 10:23:25.552060   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 10:23:25.552080   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 10:23:25.552660   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 10:23:25.552678   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 10:23:25.565482   12698 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 10:23:25.565506   12698 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 10:23:25.576825   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 10:23:25.576851   12698 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 10:23:25.588908   12698 node_ready.go:35] waiting up to 6m0s for node "addons-342031" to be "Ready" ...
	I0729 10:23:25.592378   12698 node_ready.go:49] node "addons-342031" has status "Ready":"True"
	I0729 10:23:25.592421   12698 node_ready.go:38] duration metric: took 3.465275ms for node "addons-342031" to be "Ready" ...
	I0729 10:23:25.592433   12698 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:23:25.602574   12698 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace to be "Ready" ...
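node_ready.go and pod_ready.go poll the API server until the node reports Ready and each system-critical pod does the same. The equivalent check from a shell is a couple of kubectl wait calls; a sketch, assuming the kubeconfig for this cluster is active and reusing the node name and label selectors from the log:

    # Wait for the node, then for system-critical pods, to report Ready.
    kubectl wait --for=condition=Ready node/addons-342031 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m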
	I0729 10:23:25.664460   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 10:23:25.664481   12698 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 10:23:25.666566   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 10:23:25.666586   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 10:23:25.723005   12698 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 10:23:25.723025   12698 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 10:23:25.729702   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 10:23:25.729719   12698 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 10:23:25.730275   12698 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:23:25.730296   12698 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 10:23:25.731938   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 10:23:25.731955   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 10:23:25.734928   12698 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:23:25.734950   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 10:23:25.750107   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:23:25.769528   12698 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 10:23:25.769554   12698 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 10:23:25.824578   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 10:23:25.824606   12698 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 10:23:25.826663   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:23:25.836224   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:23:25.864953   12698 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:23:25.864978   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 10:23:25.950334   12698 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:23:25.950362   12698 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 10:23:25.952327   12698 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 10:23:25.952351   12698 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 10:23:26.026376   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:23:26.056901   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 10:23:26.056925   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 10:23:26.076619   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:23:26.092872   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 10:23:26.092899   12698 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 10:23:26.109409   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:23:26.155379   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:23:26.157925   12698 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 10:23:26.157948   12698 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 10:23:26.244219   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 10:23:26.244250   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 10:23:26.311853   12698 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:26.311889   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 10:23:26.393610   12698 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 10:23:26.393630   12698 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 10:23:26.408100   12698 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 10:23:26.408122   12698 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 10:23:26.651564   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:26.705981   12698 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 10:23:26.706008   12698 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 10:23:26.710174   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 10:23:26.710191   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 10:23:26.965108   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 10:23:26.965132   12698 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 10:23:27.071664   12698 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:23:27.071686   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 10:23:27.283675   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 10:23:27.283699   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 10:23:27.442953   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:23:27.616387   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:27.620969   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 10:23:27.620992   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 10:23:27.682527   12698 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:23:27.682553   12698 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 10:23:27.984364   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:23:28.099485   12698 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.689070943s)
	I0729 10:23:28.099517   12698 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 10:23:28.607314   12698 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-342031" context rescaled to 1 replicas
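Two cluster tweaks finish here: the CoreDNS ConfigMap rewrite injects a hosts block so the cluster resolves host.minikube.internal to the gateway IP 192.168.39.1, and the coredns deployment is then rescaled to a single replica. Done by hand with kubectl, the same steps look roughly like this (a sketch; the sed expression mirrors the one in the log and assumes GNU sed):

    # Insert a hosts block ahead of the forward plugin, then drop coredns to one replica.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i\        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
    kubectl -n kube-system scale deployment coredns --replicas=1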
	I0729 10:23:29.787337   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:32.143200   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 10:23:32.143236   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:32.146559   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.147055   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:32.147085   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.147278   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:32.147503   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:32.147651   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:32.147807   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:32.182879   12698 pod_ready.go:102] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"False"
	I0729 10:23:32.614503   12698 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 10:23:32.616818   12698 pod_ready.go:92] pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.616845   12698 pod_ready.go:81] duration metric: took 7.014232912s for pod "coredns-7db6d8ff4d-7p4nt" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.616855   12698 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.623581   12698 pod_ready.go:92] pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.623610   12698 pod_ready.go:81] duration metric: took 6.747033ms for pod "coredns-7db6d8ff4d-dpx74" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.623622   12698 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.630924   12698 pod_ready.go:92] pod "etcd-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.630944   12698 pod_ready.go:81] duration metric: took 7.314368ms for pod "etcd-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.630953   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.644101   12698 pod_ready.go:92] pod "kube-apiserver-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.644131   12698 pod_ready.go:81] duration metric: took 13.170911ms for pod "kube-apiserver-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.644147   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.663805   12698 pod_ready.go:92] pod "kube-controller-manager-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:32.663835   12698 pod_ready.go:81] duration metric: took 19.67932ms for pod "kube-controller-manager-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.663848   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xxxfj" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:32.728469   12698 addons.go:234] Setting addon gcp-auth=true in "addons-342031"
	I0729 10:23:32.728520   12698 host.go:66] Checking if "addons-342031" exists ...
	I0729 10:23:32.728803   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:32.728832   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:32.743973   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0729 10:23:32.744371   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:32.745252   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:32.745273   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:32.745583   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:32.746043   12698 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:23:32.746068   12698 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:23:32.761424   12698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0729 10:23:32.761843   12698 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:23:32.762388   12698 main.go:141] libmachine: Using API Version  1
	I0729 10:23:32.762414   12698 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:23:32.762812   12698 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:23:32.763051   12698 main.go:141] libmachine: (addons-342031) Calling .GetState
	I0729 10:23:32.764746   12698 main.go:141] libmachine: (addons-342031) Calling .DriverName
	I0729 10:23:32.764976   12698 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 10:23:32.765001   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHHostname
	I0729 10:23:32.767285   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.767686   12698 main.go:141] libmachine: (addons-342031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:46:e4", ip: ""} in network mk-addons-342031: {Iface:virbr1 ExpiryTime:2024-07-29 11:22:41 +0000 UTC Type:0 Mac:52:54:00:26:46:e4 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-342031 Clientid:01:52:54:00:26:46:e4}
	I0729 10:23:32.767717   12698 main.go:141] libmachine: (addons-342031) DBG | domain addons-342031 has defined IP address 192.168.39.224 and MAC address 52:54:00:26:46:e4 in network mk-addons-342031
	I0729 10:23:32.767881   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHPort
	I0729 10:23:32.768064   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHKeyPath
	I0729 10:23:32.768228   12698 main.go:141] libmachine: (addons-342031) Calling .GetSSHUsername
	I0729 10:23:32.768365   12698 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/addons-342031/id_rsa Username:docker}
	I0729 10:23:33.015459   12698 pod_ready.go:92] pod "kube-proxy-xxxfj" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:33.015494   12698 pod_ready.go:81] duration metric: took 351.637411ms for pod "kube-proxy-xxxfj" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.015508   12698 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.408100   12698 pod_ready.go:92] pod "kube-scheduler-addons-342031" in "kube-system" namespace has status "Ready":"True"
	I0729 10:23:33.408122   12698 pod_ready.go:81] duration metric: took 392.606722ms for pod "kube-scheduler-addons-342031" in "kube-system" namespace to be "Ready" ...
	I0729 10:23:33.408137   12698 pod_ready.go:38] duration metric: took 7.815685471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:23:33.408151   12698 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:23:33.408197   12698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
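Before moving on, the flow confirms that a kube-apiserver process is actually running on the node rather than trusting the pod status alone. From a shell on the node the same check is a small pgrep loop; illustrative only:

    # Poll until the apiserver process started by kubelet shows up.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 2
    done
    echo "kube-apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"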
	I0729 10:23:34.049408   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.632795039s)
	I0729 10:23:34.049439   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.584562853s)
	I0729 10:23:34.049473   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049475   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.568516268s)
	I0729 10:23:34.049484   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049496   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049472   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049557   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049581   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.299446941s)
	I0729 10:23:34.049605   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049614   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049536   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.553418805s)
	I0729 10:23:34.049681   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.22299479s)
	I0729 10:23:34.049687   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049695   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049704   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049716   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049830   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.213577349s)
	I0729 10:23:34.049948   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.049960   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.049543   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050066   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050090   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050106   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050155   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050163   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050171   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050178   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050225   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050244   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050250   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050257   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050265   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050326   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050332   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050326   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.023921952s)
	I0729 10:23:34.050344   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050352   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050356   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050361   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050365   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050399   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.973756437s)
	I0729 10:23:34.050421   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050423   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050429   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050431   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050442   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050449   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050509   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050517   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050693   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050727   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.050741   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.941295871s)
	I0729 10:23:34.050765   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.050770   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.050776   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.050781   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050786   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.051086   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.051771   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051820   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051840   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.051847   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.051856   12698 addons.go:475] Verifying addon ingress=true in "addons-342031"
	I0729 10:23:34.051964   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.051994   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.052000   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.052008   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.052014   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.053258   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.053288   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053295   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053303   12698 addons.go:475] Verifying addon registry=true in "addons-342031"
	I0729 10:23:34.053517   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.898102796s)
	I0729 10:23:34.053541   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053556   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.050749   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053614   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053624   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053632   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.053763   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.402166812s)
	W0729 10:23:34.053786   12698 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 10:23:34.053814   12698 retry.go:31] will retry after 372.256906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 10:23:34.053968   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.053982   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.053990   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.053997   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054110   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054124   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054261   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.61126683s)
	I0729 10:23:34.054282   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054298   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054389   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054417   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054426   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054434   12698 addons.go:475] Verifying addon metrics-server=true in "addons-342031"
	I0729 10:23:34.054478   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054496   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054511   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054532   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054540   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054568   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054601   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054621   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054628   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054636   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054638   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054641   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054651   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054661   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054668   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054667   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054642   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.054652   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054757   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.054781   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054789   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054796   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054803   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.051936   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.054908   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.054918   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.054926   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.055242   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055269   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055275   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055416   12698 out.go:177] * Verifying ingress addon...
	I0729 10:23:34.055568   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055612   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055619   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055776   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.055825   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.055846   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.055984   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.056017   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.056024   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.056321   12698 out.go:177] * Verifying registry addon...
	I0729 10:23:34.059340   12698 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 10:23:34.059617   12698 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-342031 service yakd-dashboard -n yakd-dashboard
	
	I0729 10:23:34.060282   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 10:23:34.073659   12698 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 10:23:34.073685   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:34.077292   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.077309   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.077640   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.077649   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.077663   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:23:34.077747   12698 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 10:23:34.083607   12698 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 10:23:34.083624   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:34.088586   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.088601   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.088896   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.088941   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.088952   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.426903   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:23:34.565685   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:34.565812   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:34.988298   12698 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.22329162s)
	I0729 10:23:34.988340   12698 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.580122307s)
	I0729 10:23:34.988371   12698 api_server.go:72] duration metric: took 10.039738841s to wait for apiserver process to appear ...
	I0729 10:23:34.988383   12698 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:23:34.988475   12698 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8443/healthz ...
	I0729 10:23:34.988381   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.003973469s)
	I0729 10:23:34.988646   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.988672   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.988925   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:34.988927   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.988958   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.988975   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:34.988984   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:34.989215   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:34.989232   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:34.989245   12698 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-342031"
	I0729 10:23:34.990131   12698 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 10:23:34.990963   12698 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 10:23:34.992639   12698 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:23:34.993195   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 10:23:34.994345   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 10:23:34.994364   12698 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 10:23:34.998412   12698 api_server.go:279] https://192.168.39.224:8443/healthz returned 200:
	ok
	I0729 10:23:34.999584   12698 api_server.go:141] control plane version: v1.30.3
	I0729 10:23:34.999604   12698 api_server.go:131] duration metric: took 11.148651ms to wait for apiserver health ...
	I0729 10:23:34.999612   12698 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:23:35.011026   12698 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 10:23:35.011046   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:35.048270   12698 system_pods.go:59] 19 kube-system pods found
	I0729 10:23:35.048301   12698 system_pods.go:61] "coredns-7db6d8ff4d-7p4nt" [bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c] Running
	I0729 10:23:35.048306   12698 system_pods.go:61] "coredns-7db6d8ff4d-dpx74" [756984e7-bcdb-4738-9d14-7a19eef1223d] Running
	I0729 10:23:35.048311   12698 system_pods.go:61] "csi-hostpath-attacher-0" [14d9045e-0ce7-4b4c-8e60-7b879be9ad87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:23:35.048316   12698 system_pods.go:61] "csi-hostpath-resizer-0" [37a95774-ce07-4299-994d-c54ded0fa6c1] Pending
	I0729 10:23:35.048322   12698 system_pods.go:61] "csi-hostpathplugin-sls2d" [2c6bd926-1f71-43e5-8c84-5c39a668606c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:23:35.048326   12698 system_pods.go:61] "etcd-addons-342031" [14b1d740-0700-442b-90f3-b806012a0848] Running
	I0729 10:23:35.048332   12698 system_pods.go:61] "kube-apiserver-addons-342031" [165f41bf-6de7-4b96-84e5-3a2f2ef072e5] Running
	I0729 10:23:35.048337   12698 system_pods.go:61] "kube-controller-manager-addons-342031" [41ac27d3-0ce0-4622-b164-f20afe162ee7] Running
	I0729 10:23:35.048344   12698 system_pods.go:61] "kube-ingress-dns-minikube" [a8913bd2-d23f-492c-bc4b-dd5175fff394] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 10:23:35.048347   12698 system_pods.go:61] "kube-proxy-xxxfj" [1a170716-f715-4335-95c7-88c60f42a91b] Running
	I0729 10:23:35.048351   12698 system_pods.go:61] "kube-scheduler-addons-342031" [3e16db13-65e8-4ffb-91d5-03f25c7883ad] Running
	I0729 10:23:35.048356   12698 system_pods.go:61] "metrics-server-c59844bb4-xpvk9" [b347f8e7-4e0d-4d6c-98f1-e2325cffef0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:23:35.048362   12698 system_pods.go:61] "nvidia-device-plugin-daemonset-hn9w7" [4ec41c4d-a5b9-4145-965a-16a2cc121387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 10:23:35.048375   12698 system_pods.go:61] "registry-656c9c8d9c-t9mch" [c7896ca2-19fe-4e63-acf0-f820d1e54537] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:23:35.048382   12698 system_pods.go:61] "registry-proxy-vvvpt" [4854d6ef-fcb6-430d-aa34-fba27a2e4685] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:23:35.048388   12698 system_pods.go:61] "snapshot-controller-745499f584-dnhgq" [fd242e46-a424-46b6-89d2-f9d7d1827554] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.048394   12698 system_pods.go:61] "snapshot-controller-745499f584-jwdfc" [70e281a2-c081-4518-a5ea-e9d3f25724b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.048398   12698 system_pods.go:61] "storage-provisioner" [042331d6-ad1c-4aaa-b67e-152bd6e78507] Running
	I0729 10:23:35.048405   12698 system_pods.go:61] "tiller-deploy-6677d64bcd-j4zgl" [622a71ad-23e4-4ae3-bdce-fccd9e31b58c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 10:23:35.048411   12698 system_pods.go:74] duration metric: took 48.794213ms to wait for pod list to return data ...
	I0729 10:23:35.048421   12698 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:23:35.056348   12698 default_sa.go:45] found service account: "default"
	I0729 10:23:35.056374   12698 default_sa.go:55] duration metric: took 7.947228ms for default service account to be created ...
	I0729 10:23:35.056382   12698 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:23:35.070856   12698 system_pods.go:86] 19 kube-system pods found
	I0729 10:23:35.070883   12698 system_pods.go:89] "coredns-7db6d8ff4d-7p4nt" [bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c] Running
	I0729 10:23:35.070889   12698 system_pods.go:89] "coredns-7db6d8ff4d-dpx74" [756984e7-bcdb-4738-9d14-7a19eef1223d] Running
	I0729 10:23:35.070896   12698 system_pods.go:89] "csi-hostpath-attacher-0" [14d9045e-0ce7-4b4c-8e60-7b879be9ad87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:23:35.070902   12698 system_pods.go:89] "csi-hostpath-resizer-0" [37a95774-ce07-4299-994d-c54ded0fa6c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 10:23:35.070911   12698 system_pods.go:89] "csi-hostpathplugin-sls2d" [2c6bd926-1f71-43e5-8c84-5c39a668606c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:23:35.070916   12698 system_pods.go:89] "etcd-addons-342031" [14b1d740-0700-442b-90f3-b806012a0848] Running
	I0729 10:23:35.070921   12698 system_pods.go:89] "kube-apiserver-addons-342031" [165f41bf-6de7-4b96-84e5-3a2f2ef072e5] Running
	I0729 10:23:35.070925   12698 system_pods.go:89] "kube-controller-manager-addons-342031" [41ac27d3-0ce0-4622-b164-f20afe162ee7] Running
	I0729 10:23:35.070931   12698 system_pods.go:89] "kube-ingress-dns-minikube" [a8913bd2-d23f-492c-bc4b-dd5175fff394] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 10:23:35.070935   12698 system_pods.go:89] "kube-proxy-xxxfj" [1a170716-f715-4335-95c7-88c60f42a91b] Running
	I0729 10:23:35.070939   12698 system_pods.go:89] "kube-scheduler-addons-342031" [3e16db13-65e8-4ffb-91d5-03f25c7883ad] Running
	I0729 10:23:35.070948   12698 system_pods.go:89] "metrics-server-c59844bb4-xpvk9" [b347f8e7-4e0d-4d6c-98f1-e2325cffef0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:23:35.070956   12698 system_pods.go:89] "nvidia-device-plugin-daemonset-hn9w7" [4ec41c4d-a5b9-4145-965a-16a2cc121387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 10:23:35.070963   12698 system_pods.go:89] "registry-656c9c8d9c-t9mch" [c7896ca2-19fe-4e63-acf0-f820d1e54537] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:23:35.070968   12698 system_pods.go:89] "registry-proxy-vvvpt" [4854d6ef-fcb6-430d-aa34-fba27a2e4685] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:23:35.070975   12698 system_pods.go:89] "snapshot-controller-745499f584-dnhgq" [fd242e46-a424-46b6-89d2-f9d7d1827554] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.070981   12698 system_pods.go:89] "snapshot-controller-745499f584-jwdfc" [70e281a2-c081-4518-a5ea-e9d3f25724b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:23:35.070986   12698 system_pods.go:89] "storage-provisioner" [042331d6-ad1c-4aaa-b67e-152bd6e78507] Running
	I0729 10:23:35.070992   12698 system_pods.go:89] "tiller-deploy-6677d64bcd-j4zgl" [622a71ad-23e4-4ae3-bdce-fccd9e31b58c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 10:23:35.071001   12698 system_pods.go:126] duration metric: took 14.613711ms to wait for k8s-apps to be running ...
	I0729 10:23:35.071008   12698 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:23:35.071060   12698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:23:35.076588   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:35.076618   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:35.173173   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 10:23:35.173195   12698 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 10:23:35.245428   12698 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:23:35.245456   12698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 10:23:35.357975   12698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:23:35.500259   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:35.563984   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:35.568626   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:35.999529   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:36.064061   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:36.066596   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:36.501168   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:36.565147   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:36.565555   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:36.704557   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.277592908s)
	I0729 10:23:36.704617   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.704624   12698 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.633539866s)
	I0729 10:23:36.704652   12698 system_svc.go:56] duration metric: took 1.633640078s WaitForService to wait for kubelet
	I0729 10:23:36.704676   12698 kubeadm.go:582] duration metric: took 11.756032442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:23:36.704704   12698 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:23:36.704635   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.705135   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.705160   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.705171   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.705180   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.705193   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.705420   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.705455   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.705471   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.714958   12698 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:23:36.714985   12698 node_conditions.go:123] node cpu capacity is 2
	I0729 10:23:36.714995   12698 node_conditions.go:105] duration metric: took 10.282614ms to run NodePressure ...
	I0729 10:23:36.715005   12698 start.go:241] waiting for startup goroutines ...
	I0729 10:23:36.941652   12698 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.583643613s)
	I0729 10:23:36.941699   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.941716   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.941985   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.942019   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.942027   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.942042   12698 main.go:141] libmachine: Making call to close driver server
	I0729 10:23:36.942049   12698 main.go:141] libmachine: (addons-342031) Calling .Close
	I0729 10:23:36.942262   12698 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:23:36.942313   12698 main.go:141] libmachine: (addons-342031) DBG | Closing plugin on server side
	I0729 10:23:36.942339   12698 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:23:36.944327   12698 addons.go:475] Verifying addon gcp-auth=true in "addons-342031"
	I0729 10:23:36.946123   12698 out.go:177] * Verifying gcp-auth addon...
	I0729 10:23:36.948158   12698 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 10:23:36.984328   12698 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 10:23:36.984359   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:36.999759   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:37.063688   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:37.067414   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:37.452016   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:37.499529   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:37.617584   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:37.618083   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:37.952505   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.000149   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:38.065561   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:38.066692   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:38.452427   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.499671   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:38.563454   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:38.566038   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:38.952702   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:38.999037   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:39.064635   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:39.065565   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:39.452608   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:39.500471   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:39.565506   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:39.567479   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:39.951648   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:39.998487   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:40.063273   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:40.065959   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:40.453339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:40.500094   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:40.565486   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:40.566435   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:40.952116   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:40.999715   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:41.065903   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:41.065920   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:41.600915   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:41.601095   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:41.601201   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:41.606414   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:41.952158   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.000195   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:42.064522   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:42.067068   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:42.452993   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.499455   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:42.565019   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:42.567383   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:42.952180   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:42.999994   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:43.066196   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:43.066829   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:43.452173   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:43.500272   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:43.564141   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:43.566066   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:43.952644   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:43.998570   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:44.063688   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:44.066659   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:44.452192   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:44.499823   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:44.564500   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:44.566654   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:44.952093   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.000191   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:45.064442   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:45.064951   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:45.452221   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.500323   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:45.563600   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:45.565236   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:45.951740   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:45.999031   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:46.064985   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:46.066003   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:46.453013   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:46.499905   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:46.565154   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:46.567169   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:46.952342   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:46.999979   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:47.065656   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:47.066458   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:47.451632   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:47.498937   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:47.565641   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:47.571491   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:47.952800   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:47.999527   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:48.065167   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:48.065336   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:48.452082   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:48.498940   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:48.564883   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:48.565179   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:48.951995   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:48.999568   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:49.064297   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:49.066817   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:49.452518   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:49.499469   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:49.563988   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:49.565540   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:49.952340   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:49.999755   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:50.065104   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:50.065536   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:50.452707   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:50.499507   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:50.564635   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:50.565699   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:50.952444   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:51.000668   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:51.064015   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:51.065459   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:51.452015   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:51.500432   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:51.563626   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:51.565038   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:51.951800   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.016543   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:52.073463   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:52.078712   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:52.452345   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.500189   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:52.564799   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:52.566172   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:52.951561   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:52.999552   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:53.063760   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:53.065469   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:53.451642   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:53.499066   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:53.577939   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:53.579136   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.112125   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:54.112255   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:54.112862   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.117174   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.451655   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.498313   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:54.565450   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:54.568339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:54.952449   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:54.999294   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:55.067828   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:55.068536   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:55.452200   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:55.499442   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:55.563820   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:55.565499   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:55.951726   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:55.998611   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:56.064375   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:56.065427   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:56.451833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:56.499000   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:56.563202   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:56.567351   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:56.952445   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:56.999222   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:57.063395   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:57.067543   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:57.451635   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:57.505014   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:57.573323   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:57.574301   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:57.951759   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:57.998798   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:58.071158   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:58.077478   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:58.453822   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:58.500771   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:58.569250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:58.569759   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:58.951617   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:58.999045   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:59.066210   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:59.066788   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:59.451222   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:23:59.499282   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:23:59.565517   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:23:59.566613   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:23:59.953536   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:00.000754   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:00.063348   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:00.064558   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:00.452382   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:00.502387   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:00.563496   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:00.566531   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:00.951888   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.000600   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:01.064575   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:01.067404   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:01.451960   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.499288   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:01.564948   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:01.564956   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:01.951594   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:01.998303   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:02.063172   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:02.065790   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:02.452827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:02.499208   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:02.563642   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:02.566624   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:02.952394   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:02.999826   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:03.064192   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:03.066282   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:03.452879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:03.499960   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:03.564556   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:03.565719   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:03.952552   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:03.998684   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:04.063935   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:04.065496   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:04.462092   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:04.498998   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:04.564610   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:04.564715   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:04.951874   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:04.999392   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:05.065147   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:05.065575   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:05.452484   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:05.498653   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:05.564642   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:05.566583   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:05.952295   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:05.999559   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:06.064522   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:06.067275   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:06.452156   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:06.499838   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:06.564126   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:06.565959   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:06.951880   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.001277   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:07.063905   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:07.064994   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:07.452551   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.498590   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:07.564286   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:07.565398   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:07.951898   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:07.998995   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:08.065743   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:08.065998   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:08.451965   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:08.504204   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:08.578339   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:08.581885   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:08.952530   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:09.000249   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:09.064577   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:09.065458   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:09.452144   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:09.499261   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:09.567594   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:09.569376   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:09.952220   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.000372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:10.065931   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:10.067176   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:10.451907   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.498976   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:10.565948   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:10.567184   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:10.952223   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:10.999311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:11.066344   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:11.066471   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:11.451372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:11.499258   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:11.564309   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:11.566879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:11.954976   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:11.999218   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:12.065366   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:12.065859   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:12.452071   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:12.501676   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:12.564398   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:12.567702   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:12.952031   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:12.998893   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:13.067163   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:13.069555   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:13.451456   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:13.499186   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:13.563728   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:13.566664   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:13.952559   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:13.998539   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:14.063772   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:14.064785   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:14.452879   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:14.499118   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:14.563653   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:14.564863   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:14.953311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.007426   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:15.064503   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:15.066250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:15.453280   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.499464   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:15.563677   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:15.564973   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:15.952357   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:15.999680   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:16.067307   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:16.068742   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:16.452464   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:16.498740   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:16.564769   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:16.564915   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:16.952450   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.000043   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:17.065031   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:17.065372   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:17.451584   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.498643   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:17.564549   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:17.566103   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:17.952547   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:17.998827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:18.063941   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:18.065203   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:18.452041   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:18.499376   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:18.564040   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:18.566358   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:18.952182   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:18.999907   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:19.064998   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:19.066671   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:19.452250   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:19.499873   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:19.569963   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:19.582200   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:19.952408   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:19.999569   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:20.063458   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:20.066407   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:20.726177   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:20.728278   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:20.731000   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:20.732932   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:20.952392   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.001765   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:21.064088   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:21.064824   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:21.452914   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.499830   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:21.563599   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:21.565795   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:21.954833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:21.998351   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:22.064901   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:22.065560   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:22.451814   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:22.498621   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:22.564088   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:22.567339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:22.951869   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.001660   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:23.065495   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:23.065662   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:23.451797   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.498767   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:23.563583   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:23.565484   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:23.952034   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:23.998752   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:24.063708   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:24.064636   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:24.452384   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:24.500618   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:24.563656   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:24.567700   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:24:24.951933   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:24.999389   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:25.064269   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:25.065815   12698 kapi.go:107] duration metric: took 51.005539544s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 10:24:25.452779   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:25.499475   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:25.565331   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:25.952546   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:25.998349   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:26.063767   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:26.451683   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:26.499339   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:26.563337   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:26.951363   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.021757   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:27.071750   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:27.452193   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.501424   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:27.564259   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:27.952619   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:27.998963   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:28.076451   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:28.453142   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:28.499441   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:28.563217   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:28.952232   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:28.999084   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:29.064479   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:29.451426   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:29.499310   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:29.563734   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:29.956228   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:29.998993   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:30.064633   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:30.452772   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:30.499727   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:30.569479   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:30.952500   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:31.001564   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:31.063775   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:31.451762   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:31.500144   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:31.569891   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:31.953386   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.001601   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:32.064090   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:32.452278   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.499534   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:32.564215   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:32.954747   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:32.998443   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:33.063476   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:33.451640   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:33.499813   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:33.563473   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:33.952725   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:33.998798   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:34.064867   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:34.451261   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:34.499470   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:34.564234   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:34.952067   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:34.999183   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:35.064073   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:35.453150   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:35.500025   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:35.570321   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:35.952055   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.001627   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:36.063666   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:36.452143   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.499088   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:36.564906   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:36.953864   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:36.999311   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:37.064497   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:37.452876   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:37.500255   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:37.563643   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:37.951826   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.001460   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:38.074550   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:38.452773   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.499741   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:38.563964   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:38.954669   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:38.998002   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:39.064070   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:39.675869   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:39.676935   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:39.679935   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:39.953276   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.007884   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:40.074086   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:40.452297   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.504026   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:40.564417   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:40.952674   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:40.999006   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:41.064136   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:41.452104   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:41.499252   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:41.564210   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:41.952573   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.000124   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:42.064485   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:42.453090   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.500119   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:42.565465   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:42.952736   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:42.999215   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:43.063572   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:43.451431   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:43.498530   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:43.564156   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:43.959549   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:43.999654   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:44.064673   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:44.452078   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:44.501626   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:44.565560   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:44.953675   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.003407   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:45.063989   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:45.452060   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.498565   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:45.564096   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:45.952371   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:45.999590   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:46.064405   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:46.457352   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:46.499698   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:46.566785   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.046201   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.046833   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:47.065424   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.452520   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.499254   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:47.563329   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:47.952248   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:47.999316   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:48.063529   12698 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:24:48.451292   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:48.499520   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:48.563464   12698 kapi.go:107] duration metric: took 1m14.504123177s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 10:24:48.951582   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:48.999039   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:49.451972   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:49.500106   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:49.952271   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.000194   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:50.453769   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.500630   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:50.952649   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:50.998605   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:51.451390   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:51.500161   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:51.951319   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:51.999328   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:52.457663   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:52.498444   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:52.952827   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:24:52.999468   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:53.452778   12698 kapi.go:107] duration metric: took 1m16.504620091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 10:24:53.454373   12698 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-342031 cluster.
	I0729 10:24:53.455768   12698 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 10:24:53.457092   12698 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 10:24:53.499033   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:53.999761   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:54.499986   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:54.999680   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:55.499953   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.002851   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.502598   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:56.999758   12698 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:24:57.499588   12698 kapi.go:107] duration metric: took 1m22.50638969s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 10:24:57.501611   12698 out.go:177] * Enabled addons: helm-tiller, metrics-server, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0729 10:24:57.503023   12698 addons.go:510] duration metric: took 1m32.554350492s for enable addons: enabled=[helm-tiller metrics-server nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0729 10:24:57.503070   12698 start.go:246] waiting for cluster config update ...
	I0729 10:24:57.503091   12698 start.go:255] writing updated cluster config ...
	I0729 10:24:57.503363   12698 ssh_runner.go:195] Run: rm -f paused
	I0729 10:24:57.571903   12698 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 10:24:57.573560   12698 out.go:177] * Done! kubectl is now configured to use "addons-342031" cluster and "default" namespace by default
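
	The gcp-auth notes earlier in this log describe the addon's opt-out mechanism: credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec follows; the pod name, image, and the label value "true" are illustrative assumptions (the log only states that the label key must be added), not taken from this run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"     # label key from the log above; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]

	For pods created before the addon finished, the log's own suggestion applies: recreate them, or re-run the enable step with --refresh (for this profile, something like `out/minikube-linux-amd64 -p addons-342031 addons enable gcp-auth --refresh`; exact invocation assumed from the log message, not observed in this run).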
	
	
	==> CRI-O <==
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.764928652Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f479602e-e4fb-48a9-b44a-7a206a409cf8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.765246833Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722248585916168118,StartedAt:1722248585995050398,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 411a1f9fbbb593cb4784c6aa4055be52,},Annotations:map[string]string{io.kubernetes.container.hash: 700e155f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/411a1f9fbbb593cb4784c6aa4055be52/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/411a1f9fbbb593cb4784c6aa4055be52/containers/kube-apiserver/b03afb27,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-342031_411a1f9fbbb593cb4784c6aa4055be52/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f479602e-e4fb-48a9-b44a-7a206a409cf8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.776140872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6b146bb-2bce-435e-b9af-97e6851bc917 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.776234854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6b146bb-2bce-435e-b9af-97e6851bc917 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.777481328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ce5f698-47f4-49e8-aad4-015aa8ecc883 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.779138278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722249091779097943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589533,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ce5f698-47f4-49e8-aad4-015aa8ecc883 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.779656310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43a8ae2a-1bff-4f35-854b-e36075141502 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.779711364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43a8ae2a-1bff-4f35-854b-e36075141502 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.779988057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:363f70a97eb9283415a8bef3ca142364d0fed767451822e183cf12167396c2f3,PodSandboxId:6e31ec3673380c4ea87bb28a0554ab05502f1e69e334aa12fd5617262d826fb5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722248891193219804,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-9tvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e44723-d4c5-4175-ba9d-52cc5bf7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 28bad8c3,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774561bc7f43d2b846f5a26ce56a79d16e4380ba1d072dfc07dbfc9988951499,PodSandboxId:443c3d1fa171154089d6a79196f3a4a0c23cc287fd12ef9684c89a694c194783,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722248748958202872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2af2b59-c59f-4341-a2d3-88a65f799b1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 4b5c6f8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13d8d7564bcf3beb1f7d3536172bdce636bb39c681e8d98755a58067cb39a2,PodSandboxId:2caaae57648c154553576912f8a1a41c7006b108eb5b4d4733b6ad9096268810,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722248737818236102,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-vd7js,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7d782200-b333-4563-9bc5-012f43886495,},Annotations:map[string]string{io.kubernetes.container.hash: 848e50a3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0856d93df5fcf9fc505cdd5e9bf0eba0403ddcf2c48b601b40f709b26c1d7ff0,PodSandboxId:2db77a19fbd5db1e083599871468850327f59b6522e2d09bf175a0c8c76d2bbc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722248701566267530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92313c3e-4d9c-446b-9eba-48bd5781c42a,},Annotations:map[string]string{io.kubernetes.container.hash: 13cf5796,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db444daa29ef8eff8416f506ef3056e258a3455171f0a13203591ceb6af8462,PodSandboxId:b90f90fd9780d6dbf1d4086b7924853e4de7596ea2749d96a5ed0a01ad46e129,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722248667589778957,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2n787,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 20d03050-1a5f-4ffa-86b5-7dbc81463b05,},Annotations:map[string]string{io.kubernetes.container.hash: 37889472,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b34a25af319ebefaa689aa8887452cc2a39add767701722a8d9403d79a6763,PodSandboxId:39eeb4c6bc65327bf4b3ccf9558b00b7f1112ad6edb33b5f1a0cb18548637565,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722248640283925
160,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xpvk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b347f8e7-4e0d-4d6c-98f1-e2325cffef0e,},Annotations:map[string]string{io.kubernetes.container.hash: e219e715,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d,PodSandboxId:c15c19ee977c365277b59d5a1056e978c62f066b6a54f6fafddbab4fbf5678e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722248611446925625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042331d6-ad1c-4aaa-b67e-152bd6e78507,},Annotations:map[string]string{io.kubernetes.container.hash: 1c104067,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88,PodSandboxId:26e3990cc3f2282c2acd5839ddbdc0afceb1618249fabcb0a51122a8c8ce42a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722248609618737147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7p4nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9f966ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23,PodSandboxId:aa07550982de9a66384147147b14acef0b54eada1de00c14c08d2e23252dfc40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722248607009896480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxxfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a170716-f715-4335-95c7-88c60f42a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 526e7c73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9,PodSandboxId:8ce6c1a682d5cade6b6719285ad124fc4fb0d79a622388110734040114988f4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722248585958740064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9518d19b725d3da239da5734075da82a,},Annotations:map[string]string{io.kubernetes.container.hash: a7bb2e4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a,PodSandboxId:a3eeabf960faedb8ce22b13aa24b81ffef325af6b7533c2435098e6ae2d7d631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722248585912328477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a36bc1144e5af022ac844378fcd3642e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291,PodSandboxId:4c00e4633b495221bf8e40cfd00c340fe1a68e913a1ffdfffce8b0393c9789cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722248585908238289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ebbb59e746d97a13eaae9907ddaef2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,PodSandboxId:519bc6c1575ee50ea4d1021a124af96bdb9775fcc39719b307f76cf96000b4ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722248585840188170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 411a1f9fbbb593cb4784c6aa4055be52,},Annotations:map[string]string{io.kubernetes.container.hash: 700e155f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43a8ae2a-1bff-4f35-854b-e36075141502 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.821283925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3659860b-4613-4b59-86c4-ac2c4cba2fda name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.821360063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3659860b-4613-4b59-86c4-ac2c4cba2fda name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.824235365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07a0a6f5-d655-4c7e-9370-b8c68843a4c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.825940648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722249091825912182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589533,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07a0a6f5-d655-4c7e-9370-b8c68843a4c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.826716580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b8ca6d6-177c-405b-9140-697552f83c65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.826770026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b8ca6d6-177c-405b-9140-697552f83c65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.827025072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:363f70a97eb9283415a8bef3ca142364d0fed767451822e183cf12167396c2f3,PodSandboxId:6e31ec3673380c4ea87bb28a0554ab05502f1e69e334aa12fd5617262d826fb5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722248891193219804,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-9tvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e44723-d4c5-4175-ba9d-52cc5bf7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 28bad8c3,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774561bc7f43d2b846f5a26ce56a79d16e4380ba1d072dfc07dbfc9988951499,PodSandboxId:443c3d1fa171154089d6a79196f3a4a0c23cc287fd12ef9684c89a694c194783,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722248748958202872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2af2b59-c59f-4341-a2d3-88a65f799b1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 4b5c6f8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13d8d7564bcf3beb1f7d3536172bdce636bb39c681e8d98755a58067cb39a2,PodSandboxId:2caaae57648c154553576912f8a1a41c7006b108eb5b4d4733b6ad9096268810,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722248737818236102,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-vd7js,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7d782200-b333-4563-9bc5-012f43886495,},Annotations:map[string]string{io.kubernetes.container.hash: 848e50a3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0856d93df5fcf9fc505cdd5e9bf0eba0403ddcf2c48b601b40f709b26c1d7ff0,PodSandboxId:2db77a19fbd5db1e083599871468850327f59b6522e2d09bf175a0c8c76d2bbc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722248701566267530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92313c3e-4d9c-446b-9eba-48bd5781c42a,},Annotations:map[string]string{io.kubernetes.container.hash: 13cf5796,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db444daa29ef8eff8416f506ef3056e258a3455171f0a13203591ceb6af8462,PodSandboxId:b90f90fd9780d6dbf1d4086b7924853e4de7596ea2749d96a5ed0a01ad46e129,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722248667589778957,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2n787,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 20d03050-1a5f-4ffa-86b5-7dbc81463b05,},Annotations:map[string]string{io.kubernetes.container.hash: 37889472,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b34a25af319ebefaa689aa8887452cc2a39add767701722a8d9403d79a6763,PodSandboxId:39eeb4c6bc65327bf4b3ccf9558b00b7f1112ad6edb33b5f1a0cb18548637565,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722248640283925
160,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xpvk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b347f8e7-4e0d-4d6c-98f1-e2325cffef0e,},Annotations:map[string]string{io.kubernetes.container.hash: e219e715,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d,PodSandboxId:c15c19ee977c365277b59d5a1056e978c62f066b6a54f6fafddbab4fbf5678e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722248611446925625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042331d6-ad1c-4aaa-b67e-152bd6e78507,},Annotations:map[string]string{io.kubernetes.container.hash: 1c104067,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88,PodSandboxId:26e3990cc3f2282c2acd5839ddbdc0afceb1618249fabcb0a51122a8c8ce42a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722248609618737147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7p4nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9f966ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23,PodSandboxId:aa07550982de9a66384147147b14acef0b54eada1de00c14c08d2e23252dfc40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722248607009896480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxxfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a170716-f715-4335-95c7-88c60f42a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 526e7c73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9,PodSandboxId:8ce6c1a682d5cade6b6719285ad124fc4fb0d79a622388110734040114988f4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722248585958740064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9518d19b725d3da239da5734075da82a,},Annotations:map[string]string{io.kubernetes.container.hash: a7bb2e4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a,PodSandboxId:a3eeabf960faedb8ce22b13aa24b81ffef325af6b7533c2435098e6ae2d7d631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722248585912328477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a36bc1144e5af022ac844378fcd3642e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291,PodSandboxId:4c00e4633b495221bf8e40cfd00c340fe1a68e913a1ffdfffce8b0393c9789cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722248585908238289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ebbb59e746d97a13eaae9907ddaef2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,PodSandboxId:519bc6c1575ee50ea4d1021a124af96bdb9775fcc39719b307f76cf96000b4ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722248585840188170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 411a1f9fbbb593cb4784c6aa4055be52,},Annotations:map[string]string{io.kubernetes.container.hash: 700e155f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b8ca6d6-177c-405b-9140-697552f83c65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.863452619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d6626b9-5b30-46bd-85c5-b8e6685ffed7 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.863593496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d6626b9-5b30-46bd-85c5-b8e6685ffed7 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.864809920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63d90280-dd18-404c-ab4d-7927e55bdb27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.866739270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722249091866709154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589533,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63d90280-dd18-404c-ab4d-7927e55bdb27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.867613023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e880dacc-68b9-4a7e-8165-7fa99fdedcb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.867689127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e880dacc-68b9-4a7e-8165-7fa99fdedcb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.868037045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:363f70a97eb9283415a8bef3ca142364d0fed767451822e183cf12167396c2f3,PodSandboxId:6e31ec3673380c4ea87bb28a0554ab05502f1e69e334aa12fd5617262d826fb5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722248891193219804,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-9tvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e44723-d4c5-4175-ba9d-52cc5bf7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 28bad8c3,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774561bc7f43d2b846f5a26ce56a79d16e4380ba1d072dfc07dbfc9988951499,PodSandboxId:443c3d1fa171154089d6a79196f3a4a0c23cc287fd12ef9684c89a694c194783,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722248748958202872,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2af2b59-c59f-4341-a2d3-88a65f799b1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 4b5c6f8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13d8d7564bcf3beb1f7d3536172bdce636bb39c681e8d98755a58067cb39a2,PodSandboxId:2caaae57648c154553576912f8a1a41c7006b108eb5b4d4733b6ad9096268810,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722248737818236102,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-vd7js,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7d782200-b333-4563-9bc5-012f43886495,},Annotations:map[string]string{io.kubernetes.container.hash: 848e50a3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0856d93df5fcf9fc505cdd5e9bf0eba0403ddcf2c48b601b40f709b26c1d7ff0,PodSandboxId:2db77a19fbd5db1e083599871468850327f59b6522e2d09bf175a0c8c76d2bbc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722248701566267530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92313c3e-4d9c-446b-9eba-48bd5781c42a,},Annotations:map[string]string{io.kubernetes.container.hash: 13cf5796,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db444daa29ef8eff8416f506ef3056e258a3455171f0a13203591ceb6af8462,PodSandboxId:b90f90fd9780d6dbf1d4086b7924853e4de7596ea2749d96a5ed0a01ad46e129,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722248667589778957,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2n787,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 20d03050-1a5f-4ffa-86b5-7dbc81463b05,},Annotations:map[string]string{io.kubernetes.container.hash: 37889472,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b34a25af319ebefaa689aa8887452cc2a39add767701722a8d9403d79a6763,PodSandboxId:39eeb4c6bc65327bf4b3ccf9558b00b7f1112ad6edb33b5f1a0cb18548637565,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722248640283925
160,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xpvk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b347f8e7-4e0d-4d6c-98f1-e2325cffef0e,},Annotations:map[string]string{io.kubernetes.container.hash: e219e715,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d,PodSandboxId:c15c19ee977c365277b59d5a1056e978c62f066b6a54f6fafddbab4fbf5678e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722248611446925625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042331d6-ad1c-4aaa-b67e-152bd6e78507,},Annotations:map[string]string{io.kubernetes.container.hash: 1c104067,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88,PodSandboxId:26e3990cc3f2282c2acd5839ddbdc0afceb1618249fabcb0a51122a8c8ce42a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722248609618737147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7p4nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde1ef2e-ae49-44a4-a83c-fe1d0cf4fe4c,},Annotations:map[string]string{io.kubernetes.container.hash: 9f966ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23,PodSandboxId:aa07550982de9a66384147147b14acef0b54eada1de00c14c08d2e23252dfc40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722248607009896480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xxxfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a170716-f715-4335-95c7-88c60f42a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 526e7c73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9,PodSandboxId:8ce6c1a682d5cade6b6719285ad124fc4fb0d79a622388110734040114988f4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722248585958740064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9518d19b725d3da239da5734075da82a,},Annotations:map[string]string{io.kubernetes.container.hash: a7bb2e4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a,PodSandboxId:a3eeabf960faedb8ce22b13aa24b81ffef325af6b7533c2435098e6ae2d7d631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722248585912328477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a36bc1144e5af022ac844378fcd3642e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291,PodSandboxId:4c00e4633b495221bf8e40cfd00c340fe1a68e913a1ffdfffce8b0393c9789cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722248585908238289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3ebbb59e746d97a13eaae9907ddaef2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731,PodSandboxId:519bc6c1575ee50ea4d1021a124af96bdb9775fcc39719b307f76cf96000b4ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722248585840188170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-342031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 411a1f9fbbb593cb4784c6aa4055be52,},Annotations:map[string]string{io.kubernetes.container.hash: 700e155f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e880dacc-68b9-4a7e-8165-7fa99fdedcb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.871989005Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=4284f59d-ae20-4150-87dd-d7ab09a0633e name=/runtime.v1.RuntimeService/Status
	Jul 29 10:31:31 addons-342031 crio[678]: time="2024-07-29 10:31:31.872049839Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4284f59d-ae20-4150-87dd-d7ab09a0633e name=/runtime.v1.RuntimeService/Status
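	
	A brief aside on the block above: the steady stream of Version, ImageFsInfo and ListContainers requests (with no filters applied) is most likely the kubelet's routine CRI polling of CRI-O rather than test activity. A minimal sketch of querying the same CRI endpoints by hand, assuming crictl is available inside the minikube VM (it normally is for the crio runtime):
	
	  # runtime version (same data as the Version responses above)
	  out/minikube-linux-amd64 -p addons-342031 ssh "sudo crictl version"
	  # image filesystem usage (same data as the ImageFsInfo responses above)
	  out/minikube-linux-amd64 -p addons-342031 ssh "sudo crictl imagefsinfo"
	  # full container list (same data as the ListContainers responses above)
	  out/minikube-linux-amd64 -p addons-342031 ssh "sudo crictl ps -a"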
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	363f70a97eb92       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   6e31ec3673380       hello-world-app-6778b5fc9f-9tvlv
	774561bc7f43d       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   443c3d1fa1711       nginx
	3a13d8d7564bc       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   2caaae57648c1       headlamp-7867546754-vd7js
	0856d93df5fcf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   2db77a19fbd5d       busybox
	9db444daa29ef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   b90f90fd9780d       local-path-provisioner-8d985888d-2n787
	c9b34a25af319       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   39eeb4c6bc653       metrics-server-c59844bb4-xpvk9
	49ac166beb18a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   c15c19ee977c3       storage-provisioner
	214389dc390da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   26e3990cc3f22       coredns-7db6d8ff4d-7p4nt
	6a6a2c9fa4cd5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   aa07550982de9       kube-proxy-xxxfj
	1a975bababdfc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   8ce6c1a682d5c       etcd-addons-342031
	7d87bbdda87a5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   a3eeabf960fae       kube-scheduler-addons-342031
	4f49233f8bdc2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   4c00e4633b495       kube-controller-manager-addons-342031
	665880b8788e6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   519bc6c1575ee       kube-apiserver-addons-342031
	
	
	==> coredns [214389dc390da290fe0be44304d2e4275de79963a4347c2da192aec4a043cc88] <==
	[INFO] 10.244.0.8:44754 - 33077 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000294331s
	[INFO] 10.244.0.8:35406 - 31702 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058837s
	[INFO] 10.244.0.8:35406 - 32424 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108327s
	[INFO] 10.244.0.8:56263 - 19589 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073544s
	[INFO] 10.244.0.8:56263 - 31367 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091648s
	[INFO] 10.244.0.8:46604 - 8653 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057047s
	[INFO] 10.244.0.8:46604 - 35023 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000040978s
	[INFO] 10.244.0.8:60575 - 8058 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151614s
	[INFO] 10.244.0.8:60575 - 28743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030657s
	[INFO] 10.244.0.8:50808 - 6850 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005485s
	[INFO] 10.244.0.8:50808 - 26564 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030319s
	[INFO] 10.244.0.8:39296 - 43933 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045647s
	[INFO] 10.244.0.8:39296 - 47263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000023053s
	[INFO] 10.244.0.8:37039 - 38574 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000042689s
	[INFO] 10.244.0.8:37039 - 23456 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096291s
	[INFO] 10.244.0.22:47887 - 6552 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000453705s
	[INFO] 10.244.0.22:59406 - 36777 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151571s
	[INFO] 10.244.0.22:51100 - 13843 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108325s
	[INFO] 10.244.0.22:33937 - 16849 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077502s
	[INFO] 10.244.0.22:33724 - 1871 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017102s
	[INFO] 10.244.0.22:37650 - 6852 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174764s
	[INFO] 10.244.0.22:55938 - 30152 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00119209s
	[INFO] 10.244.0.22:39715 - 63188 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001474916s
	[INFO] 10.244.0.26:54234 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000448092s
	[INFO] 10.244.0.26:44655 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148009s
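	
	The NXDOMAIN entries above for names such as registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local are the expected effect of the pod resolv.conf search path (ndots:5): each lookup is first retried with the cluster search domains appended, and only the final bare query returns NOERROR. A sketch of reproducing this from inside the cluster, using a hypothetical one-off busybox pod:
	
	  kubectl --context addons-342031 run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- nslookup registry.kube-system.svc.cluster.local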
	
	
	==> describe nodes <==
	Name:               addons-342031
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-342031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=addons-342031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_23_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-342031
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:23:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-342031
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:31:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:28:17 +0000   Mon, 29 Jul 2024 10:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    addons-342031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef64a556b04d4dc1b1de2f3ff74bb9cb
	  System UUID:                ef64a556-b04d-4dc1-b1de-2f3ff74bb9cb
	  Boot ID:                    d471fc6e-08fb-4c3c-ab9d-1544ab7820e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  default                     hello-world-app-6778b5fc9f-9tvlv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  headlamp                    headlamp-7867546754-vd7js                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-7db6d8ff4d-7p4nt                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m8s
	  kube-system                 etcd-addons-342031                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m21s
	  kube-system                 kube-apiserver-addons-342031              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-addons-342031     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-xxxfj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-addons-342031              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 metrics-server-c59844bb4-xpvk9            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m2s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  local-path-storage          local-path-provisioner-8d985888d-2n787    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet          Node addons-342031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet          Node addons-342031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m27s)  kubelet          Node addons-342031 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m21s                  kubelet          Node addons-342031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s                  kubelet          Node addons-342031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s                  kubelet          Node addons-342031 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m20s                  kubelet          Node addons-342031 status is now: NodeReady
	  Normal  RegisteredNode           8m8s                   node-controller  Node addons-342031 event: Registered Node addons-342031 in Controller
	
	
	==> dmesg <==
	[  +5.197569] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.011517] kauditd_printk_skb: 145 callbacks suppressed
	[  +8.338938] kauditd_printk_skb: 71 callbacks suppressed
	[Jul29 10:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.222823] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.511401] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.696308] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.313767] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.147380] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.164374] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.682264] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.240566] kauditd_printk_skb: 7 callbacks suppressed
	[Jul29 10:25] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.107081] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.302284] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.131048] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.026927] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.099838] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.038833] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.228653] kauditd_printk_skb: 8 callbacks suppressed
	[Jul29 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.342837] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.264022] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 10:28] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.357462] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1a975bababdfcab83d90b0d23d59977dfb34bedf202f7efbeae2397ee24cfda9] <==
	{"level":"warn","ts":"2024-07-29T10:24:39.661937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.636967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11625"}
	{"level":"info","ts":"2024-07-29T10:24:39.661981Z","caller":"traceutil/trace.go:171","msg":"trace[1368386628] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1093; }","duration":"220.708184ms","start":"2024-07-29T10:24:39.441266Z","end":"2024-07-29T10:24:39.661974Z","steps":["trace[1368386628] 'agreement among raft nodes before linearized reading'  (duration: 220.592572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:24:39.662917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.533831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T10:24:39.663436Z","caller":"traceutil/trace.go:171","msg":"trace[873553489] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1093; }","duration":"112.075064ms","start":"2024-07-29T10:24:39.551352Z","end":"2024-07-29T10:24:39.663427Z","steps":["trace[873553489] 'agreement among raft nodes before linearized reading'  (duration: 111.476513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:24:39.666234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.997887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-29T10:24:39.666286Z","caller":"traceutil/trace.go:171","msg":"trace[359601509] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1093; }","duration":"180.084813ms","start":"2024-07-29T10:24:39.486193Z","end":"2024-07-29T10:24:39.666278Z","steps":["trace[359601509] 'agreement among raft nodes before linearized reading'  (duration: 176.206598ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:24:47.030978Z","caller":"traceutil/trace.go:171","msg":"trace[856797194] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"170.258553ms","start":"2024-07-29T10:24:46.8607Z","end":"2024-07-29T10:24:47.030959Z","steps":["trace[856797194] 'process raft request'  (duration: 170.043405ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:24:59.327904Z","caller":"traceutil/trace.go:171","msg":"trace[76740972] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"233.1393ms","start":"2024-07-29T10:24:59.094751Z","end":"2024-07-29T10:24:59.32789Z","steps":["trace[76740972] 'process raft request'  (duration: 233.054114ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:25:37.651077Z","caller":"traceutil/trace.go:171","msg":"trace[2136260228] linearizableReadLoop","detail":"{readStateIndex:1571; appliedIndex:1570; }","duration":"119.730832ms","start":"2024-07-29T10:25:37.531316Z","end":"2024-07-29T10:25:37.651047Z","steps":["trace[2136260228] 'read index received'  (duration: 119.548204ms)","trace[2136260228] 'applied index is now lower than readState.Index'  (duration: 182.077µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T10:25:37.651823Z","caller":"traceutil/trace.go:171","msg":"trace[613066166] transaction","detail":"{read_only:false; response_revision:1520; number_of_response:1; }","duration":"210.390724ms","start":"2024-07-29T10:25:37.441422Z","end":"2024-07-29T10:25:37.651812Z","steps":["trace[613066166] 'process raft request'  (duration: 209.488857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:25:37.652497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.002017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T10:25:37.653258Z","caller":"traceutil/trace.go:171","msg":"trace[1821470579] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1520; }","duration":"121.977594ms","start":"2024-07-29T10:25:37.531268Z","end":"2024-07-29T10:25:37.653246Z","steps":["trace[1821470579] 'agreement among raft nodes before linearized reading'  (duration: 119.990641ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:26:10.095664Z","caller":"traceutil/trace.go:171","msg":"trace[1586516240] linearizableReadLoop","detail":"{readStateIndex:1723; appliedIndex:1722; }","duration":"318.187478ms","start":"2024-07-29T10:26:09.777464Z","end":"2024-07-29T10:26:10.095651Z","steps":["trace[1586516240] 'read index received'  (duration: 318.026338ms)","trace[1586516240] 'applied index is now lower than readState.Index'  (duration: 160.739µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T10:26:10.095921Z","caller":"traceutil/trace.go:171","msg":"trace[1816313529] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"384.938535ms","start":"2024-07-29T10:26:09.710971Z","end":"2024-07-29T10:26:10.095909Z","steps":["trace[1816313529] 'process raft request'  (duration: 384.602483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.096052Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:26:09.710952Z","time spent":"385.011765ms","remote":"127.0.0.1:36184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1661 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-29T10:26:10.096213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.746178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T10:26:10.096277Z","caller":"traceutil/trace.go:171","msg":"trace[671831025] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1665; }","duration":"318.832823ms","start":"2024-07-29T10:26:09.777437Z","end":"2024-07-29T10:26:10.09627Z","steps":["trace[671831025] 'agreement among raft nodes before linearized reading'  (duration: 318.744308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.096302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:26:09.777425Z","time spent":"318.870569ms","remote":"127.0.0.1:36102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":9,"response size":30,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	{"level":"warn","ts":"2024-07-29T10:26:10.096459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.292767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-29T10:26:10.09657Z","caller":"traceutil/trace.go:171","msg":"trace[1250267455] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1665; }","duration":"264.424293ms","start":"2024-07-29T10:26:09.83214Z","end":"2024-07-29T10:26:10.096564Z","steps":["trace[1250267455] 'agreement among raft nodes before linearized reading'  (duration: 264.265303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.097379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.006506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T10:26:10.097428Z","caller":"traceutil/trace.go:171","msg":"trace[167439714] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1665; }","duration":"217.077453ms","start":"2024-07-29T10:26:09.88034Z","end":"2024-07-29T10:26:10.097418Z","steps":["trace[167439714] 'agreement among raft nodes before linearized reading'  (duration: 217.000217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:26:10.097521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.520262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T10:26:10.097778Z","caller":"traceutil/trace.go:171","msg":"trace[1034892266] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1665; }","duration":"232.796592ms","start":"2024-07-29T10:26:09.864973Z","end":"2024-07-29T10:26:10.09777Z","steps":["trace[1034892266] 'agreement among raft nodes before linearized reading'  (duration: 232.526285ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T10:26:14.94637Z","caller":"traceutil/trace.go:171","msg":"trace[420954400] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"218.31713ms","start":"2024-07-29T10:26:14.72804Z","end":"2024-07-29T10:26:14.946357Z","steps":["trace[420954400] 'process raft request'  (duration: 218.231021ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:31:32 up 9 min,  0 users,  load average: 0.56, 0.73, 0.54
	Linux addons-342031 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [665880b8788e6cc7b692f52c3a4dec3a71f68748b7970ff38dad1de60f7b0731] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0729 10:25:11.002781       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	E0729 10:25:11.009969       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	E0729 10:25:11.036984       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.226.119:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.226.119:443: connect: connection refused
	I0729 10:25:11.129096       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 10:25:30.586063       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.249.49"}
	I0729 10:25:44.317840       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 10:25:44.496219       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.27.159"}
	I0729 10:25:50.045882       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 10:25:51.078521       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 10:26:17.147217       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 10:26:42.910863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.910910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.937800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.937868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.972879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.972935       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.981462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.981520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 10:26:42.997288       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 10:26:42.997702       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 10:26:43.973821       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 10:26:43.999163       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 10:26:44.008492       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 10:28:08.175198       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.23.132"}
	
	
	==> kube-controller-manager [4f49233f8bdc242e81989422ff49ab64d41bea91e75a5e902c38498c8aa34291] <==
	W0729 10:29:19.217825       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:29:19.217922       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:29:36.990395       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:29:36.990697       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:29:47.422492       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:29:47.422597       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:11.030730       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:11.030834       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:13.923492       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:13.923686       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:17.228472       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:17.228647       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:21.896781       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:21.896816       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:50.590687       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:50.590870       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:30:57.293490       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:30:57.293739       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:31:05.097628       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:31:05.097757       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:31:12.996140       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:31:12.996199       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 10:31:30.275002       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 10:31:30.275133       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 10:31:30.830600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="12.45µs"
	
	
	==> kube-proxy [6a6a2c9fa4cd5b1c22709372448330cee907903e60cbb1304fce9e5be57abe23] <==
	I0729 10:23:27.854333       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:23:27.874375       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.224"]
	I0729 10:23:27.982766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:23:27.982861       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:23:27.982884       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:23:27.987783       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:23:27.988045       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:23:27.988076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:23:27.992214       1 config.go:192] "Starting service config controller"
	I0729 10:23:27.992225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:23:27.992246       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:23:27.992249       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:23:27.992632       1 config.go:319] "Starting node config controller"
	I0729 10:23:27.992640       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:23:28.092639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:23:28.092673       1 shared_informer.go:320] Caches are synced for node config
	I0729 10:23:28.092683       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d87bbdda87a505f694af02bd69a0acedf24df447acc92bead4a95f26817026a] <==
	W0729 10:23:09.501344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:23:09.501490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:23:09.524429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:23:09.524674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:23:09.524435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.524857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.646567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:23:09.646803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 10:23:09.658779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.660136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.847089       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:23:09.847218       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:23:09.847717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 10:23:09.847838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 10:23:09.927311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.927406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.939264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:23:09.939422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:23:09.948624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 10:23:09.948827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 10:23:09.984184       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 10:23:09.984282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 10:23:09.990489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:23:09.990634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 10:23:12.368327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 10:29:05 addons-342031 kubelet[1272]: I0729 10:29:05.492655    1272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:29:11 addons-342031 kubelet[1272]: E0729 10:29:11.510882    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:29:11 addons-342031 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:29:11 addons-342031 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:29:11 addons-342031 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:29:11 addons-342031 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:30:11 addons-342031 kubelet[1272]: E0729 10:30:11.510304    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:30:11 addons-342031 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:30:11 addons-342031 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:30:11 addons-342031 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:30:11 addons-342031 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:30:14 addons-342031 kubelet[1272]: I0729 10:30:14.492474    1272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:31:11 addons-342031 kubelet[1272]: E0729 10:31:11.510500    1272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:31:11 addons-342031 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:31:11 addons-342031 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:31:11 addons-342031 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:31:11 addons-342031 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:31:15 addons-342031 kubelet[1272]: I0729 10:31:15.492260    1272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:31:30 addons-342031 kubelet[1272]: I0729 10:31:30.852808    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-9tvlv" podStartSLOduration=201.291698991 podStartE2EDuration="3m23.852768927s" podCreationTimestamp="2024-07-29 10:28:07 +0000 UTC" firstStartedPulling="2024-07-29 10:28:08.61864373 +0000 UTC m=+297.286239332" lastFinishedPulling="2024-07-29 10:28:11.179713666 +0000 UTC m=+299.847309268" observedRunningTime="2024-07-29 10:28:11.872800536 +0000 UTC m=+300.540396157" watchObservedRunningTime="2024-07-29 10:31:30.852768927 +0000 UTC m=+499.520364545"
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.234278    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjkg2\" (UniqueName: \"kubernetes.io/projected/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-kube-api-access-vjkg2\") pod \"b347f8e7-4e0d-4d6c-98f1-e2325cffef0e\" (UID: \"b347f8e7-4e0d-4d6c-98f1-e2325cffef0e\") "
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.234335    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-tmp-dir\") pod \"b347f8e7-4e0d-4d6c-98f1-e2325cffef0e\" (UID: \"b347f8e7-4e0d-4d6c-98f1-e2325cffef0e\") "
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.234794    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b347f8e7-4e0d-4d6c-98f1-e2325cffef0e" (UID: "b347f8e7-4e0d-4d6c-98f1-e2325cffef0e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.245349    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-kube-api-access-vjkg2" (OuterVolumeSpecName: "kube-api-access-vjkg2") pod "b347f8e7-4e0d-4d6c-98f1-e2325cffef0e" (UID: "b347f8e7-4e0d-4d6c-98f1-e2325cffef0e"). InnerVolumeSpecName "kube-api-access-vjkg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.335485    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vjkg2\" (UniqueName: \"kubernetes.io/projected/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-kube-api-access-vjkg2\") on node \"addons-342031\" DevicePath \"\""
	Jul 29 10:31:32 addons-342031 kubelet[1272]: I0729 10:31:32.335582    1272 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b347f8e7-4e0d-4d6c-98f1-e2325cffef0e-tmp-dir\") on node \"addons-342031\" DevicePath \"\""
	
	
	==> storage-provisioner [49ac166beb18a8bae38259ddef3712206a2bfee3cd8d4af1a79fc3569021ed8d] <==
	I0729 10:23:33.149814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 10:23:33.170370       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 10:23:33.170422       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 10:23:33.186206       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 10:23:33.186910       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f93029a-5378-4c23-a80c-b0508d8c0c0f", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e became leader
	I0729 10:23:33.190272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e!
	I0729 10:23:33.291905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-342031_b562a95d-a1b0-4a80-bdf0-e80a1848626e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-342031 -n addons-342031
helpers_test.go:261: (dbg) Run:  kubectl --context addons-342031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (359.31s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-342031
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-342031: exit status 82 (2m0.46988263s)

                                                
                                                
-- stdout --
	* Stopping node "addons-342031"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-342031" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-342031
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-342031: exit status 11 (21.698104755s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-342031" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-342031
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-342031: exit status 11 (6.143384259s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-342031" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-342031
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-342031: exit status 11 (6.145692278s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-342031" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.46s)
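For local triage, a minimal sketch of the follow-up the error boxes above ask for; the profile name addons-342031 and the out/minikube-linux-amd64 binary path are taken from the failing run, so adjust them for a different environment:

	# re-run the stop that timed out with exit status 82
	out/minikube-linux-amd64 stop -p addons-342031
	# collect logs to attach to a GitHub issue, as the error box suggests
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-342031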

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 node stop m02 -v=7 --alsologtostderr
E0729 10:44:25.435807   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:44:57.915494   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:45:47.357085   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.479411817s)

                                                
                                                
-- stdout --
	* Stopping node "ha-763049-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:44:13.879031   26738 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:13.879505   26738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:13.879605   26738 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:13.879618   26738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:13.879863   26738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:44:13.880210   26738 mustload.go:65] Loading cluster: ha-763049
	I0729 10:44:13.880618   26738 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:44:13.880640   26738 stop.go:39] StopHost: ha-763049-m02
	I0729 10:44:13.880973   26738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:44:13.881031   26738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:44:13.898383   26738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0729 10:44:13.898987   26738 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:44:13.899589   26738 main.go:141] libmachine: Using API Version  1
	I0729 10:44:13.899618   26738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:44:13.900071   26738 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:44:13.902573   26738 out.go:177] * Stopping node "ha-763049-m02"  ...
	I0729 10:44:13.904358   26738 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 10:44:13.904386   26738 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:44:13.904647   26738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 10:44:13.904691   26738 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:44:13.907869   26738 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:44:13.908360   26738 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:44:13.908392   26738 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:44:13.908556   26738 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:44:13.908717   26738 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:44:13.908901   26738 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:44:13.909072   26738 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:44:13.998360   26738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 10:44:14.053821   26738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 10:44:14.108889   26738 main.go:141] libmachine: Stopping "ha-763049-m02"...
	I0729 10:44:14.108914   26738 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:44:14.110592   26738 main.go:141] libmachine: (ha-763049-m02) Calling .Stop
	I0729 10:44:14.113846   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 0/120
	I0729 10:44:15.115903   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 1/120
	I0729 10:44:16.117226   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 2/120
	I0729 10:44:17.118460   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 3/120
	I0729 10:44:18.119659   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 4/120
	I0729 10:44:19.121797   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 5/120
	I0729 10:44:20.123455   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 6/120
	I0729 10:44:21.125484   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 7/120
	I0729 10:44:22.127931   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 8/120
	I0729 10:44:23.129860   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 9/120
	I0729 10:44:24.131531   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 10/120
	I0729 10:44:25.133236   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 11/120
	I0729 10:44:26.134475   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 12/120
	I0729 10:44:27.136008   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 13/120
	I0729 10:44:28.137960   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 14/120
	I0729 10:44:29.139347   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 15/120
	I0729 10:44:30.140807   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 16/120
	I0729 10:44:31.142239   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 17/120
	I0729 10:44:32.143560   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 18/120
	I0729 10:44:33.144878   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 19/120
	I0729 10:44:34.147160   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 20/120
	I0729 10:44:35.149017   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 21/120
	I0729 10:44:36.150345   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 22/120
	I0729 10:44:37.152086   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 23/120
	I0729 10:44:38.153602   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 24/120
	I0729 10:44:39.155130   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 25/120
	I0729 10:44:40.156382   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 26/120
	I0729 10:44:41.158052   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 27/120
	I0729 10:44:42.159489   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 28/120
	I0729 10:44:43.161323   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 29/120
	I0729 10:44:44.163289   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 30/120
	I0729 10:44:45.165214   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 31/120
	I0729 10:44:46.166608   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 32/120
	I0729 10:44:47.169053   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 33/120
	I0729 10:44:48.170293   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 34/120
	I0729 10:44:49.172114   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 35/120
	I0729 10:44:50.173545   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 36/120
	I0729 10:44:51.174974   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 37/120
	I0729 10:44:52.176227   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 38/120
	I0729 10:44:53.177677   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 39/120
	I0729 10:44:54.179331   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 40/120
	I0729 10:44:55.180763   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 41/120
	I0729 10:44:56.182003   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 42/120
	I0729 10:44:57.183446   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 43/120
	I0729 10:44:58.185188   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 44/120
	I0729 10:44:59.187239   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 45/120
	I0729 10:45:00.189314   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 46/120
	I0729 10:45:01.190837   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 47/120
	I0729 10:45:02.192389   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 48/120
	I0729 10:45:03.193712   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 49/120
	I0729 10:45:04.195341   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 50/120
	I0729 10:45:05.197228   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 51/120
	I0729 10:45:06.198425   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 52/120
	I0729 10:45:07.199729   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 53/120
	I0729 10:45:08.202283   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 54/120
	I0729 10:45:09.203958   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 55/120
	I0729 10:45:10.205243   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 56/120
	I0729 10:45:11.207134   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 57/120
	I0729 10:45:12.209146   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 58/120
	I0729 10:45:13.210493   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 59/120
	I0729 10:45:14.212244   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 60/120
	I0729 10:45:15.213827   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 61/120
	I0729 10:45:16.215454   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 62/120
	I0729 10:45:17.216691   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 63/120
	I0729 10:45:18.217923   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 64/120
	I0729 10:45:19.219760   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 65/120
	I0729 10:45:20.221413   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 66/120
	I0729 10:45:21.223506   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 67/120
	I0729 10:45:22.225759   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 68/120
	I0729 10:45:23.227591   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 69/120
	I0729 10:45:24.229324   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 70/120
	I0729 10:45:25.231486   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 71/120
	I0729 10:45:26.233037   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 72/120
	I0729 10:45:27.234749   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 73/120
	I0729 10:45:28.235912   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 74/120
	I0729 10:45:29.237695   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 75/120
	I0729 10:45:30.239161   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 76/120
	I0729 10:45:31.241419   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 77/120
	I0729 10:45:32.243081   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 78/120
	I0729 10:45:33.245245   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 79/120
	I0729 10:45:34.247131   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 80/120
	I0729 10:45:35.248583   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 81/120
	I0729 10:45:36.250090   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 82/120
	I0729 10:45:37.251459   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 83/120
	I0729 10:45:38.253112   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 84/120
	I0729 10:45:39.255184   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 85/120
	I0729 10:45:40.257209   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 86/120
	I0729 10:45:41.258520   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 87/120
	I0729 10:45:42.259776   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 88/120
	I0729 10:45:43.261083   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 89/120
	I0729 10:45:44.263191   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 90/120
	I0729 10:45:45.265179   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 91/120
	I0729 10:45:46.266407   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 92/120
	I0729 10:45:47.268647   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 93/120
	I0729 10:45:48.269930   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 94/120
	I0729 10:45:49.271827   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 95/120
	I0729 10:45:50.273201   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 96/120
	I0729 10:45:51.274450   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 97/120
	I0729 10:45:52.275794   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 98/120
	I0729 10:45:53.277926   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 99/120
	I0729 10:45:54.280308   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 100/120
	I0729 10:45:55.281587   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 101/120
	I0729 10:45:56.283269   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 102/120
	I0729 10:45:57.285228   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 103/120
	I0729 10:45:58.287300   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 104/120
	I0729 10:45:59.289038   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 105/120
	I0729 10:46:00.290449   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 106/120
	I0729 10:46:01.291905   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 107/120
	I0729 10:46:02.293290   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 108/120
	I0729 10:46:03.294902   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 109/120
	I0729 10:46:04.296822   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 110/120
	I0729 10:46:05.298678   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 111/120
	I0729 10:46:06.300286   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 112/120
	I0729 10:46:07.301822   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 113/120
	I0729 10:46:08.303270   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 114/120
	I0729 10:46:09.305260   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 115/120
	I0729 10:46:10.306758   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 116/120
	I0729 10:46:11.308263   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 117/120
	I0729 10:46:12.309877   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 118/120
	I0729 10:46:13.311594   26738 main.go:141] libmachine: (ha-763049-m02) Waiting for machine to stop 119/120
	I0729 10:46:14.312857   26738 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 10:46:14.312977   26738 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-763049 node stop m02 -v=7 --alsologtostderr": exit status 30
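The failing stop can be re-attempted by hand against the same profile; the first command below is the exact invocation from the failure above, while the virsh check is only an assumed way to inspect the underlying libvirt domain if the stop hangs again (it is not something the test runs):

	# same invocation as the failed test step above
	out/minikube-linux-amd64 -p ha-763049 node stop m02 -v=7 --alsologtostderr
	# assumed follow-up check of the libvirt domain state (not part of the test)
	sudo virsh list --all | grep ha-763049-m02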
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (19.069634259s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:46:14.353914   27180 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:14.354036   27180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:14.354044   27180 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:14.354048   27180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:14.354223   27180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:46:14.354374   27180 out.go:298] Setting JSON to false
	I0729 10:46:14.354400   27180 mustload.go:65] Loading cluster: ha-763049
	I0729 10:46:14.354541   27180 notify.go:220] Checking for updates...
	I0729 10:46:14.354810   27180 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:14.354825   27180 status.go:255] checking status of ha-763049 ...
	I0729 10:46:14.355263   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.355318   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.374975   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I0729 10:46:14.375396   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.376048   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.376072   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.376422   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.376609   27180 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:46:14.378222   27180 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:46:14.378238   27180 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:14.378552   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.378595   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.395415   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0729 10:46:14.395964   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.396633   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.396661   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.397154   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.397350   27180 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:46:14.400592   27180 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:14.401195   27180 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:14.401224   27180 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:14.401368   27180 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:14.401759   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.401806   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.417109   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0729 10:46:14.417555   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.418058   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.418081   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.418428   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.418618   27180 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:46:14.418842   27180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:14.418871   27180 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:46:14.421757   27180 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:14.422198   27180 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:14.422227   27180 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:14.422413   27180 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:46:14.422587   27180 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:46:14.422741   27180 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:46:14.422880   27180 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:46:14.509338   27180 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:14.517159   27180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:14.537375   27180 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:14.537403   27180 api_server.go:166] Checking apiserver status ...
	I0729 10:46:14.537435   27180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:14.555998   27180 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:46:14.566749   27180 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:14.566805   27180 ssh_runner.go:195] Run: ls
	I0729 10:46:14.571532   27180 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:14.576042   27180 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:14.576066   27180 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:46:14.576074   27180 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:14.576094   27180 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:46:14.576367   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.576404   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.591544   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0729 10:46:14.591997   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.592410   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.592429   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.592764   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.592954   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:46:14.594528   27180 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:46:14.594542   27180 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:14.594837   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.594890   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.610221   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0729 10:46:14.610617   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.611158   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.611181   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.611516   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.611717   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:46:14.614669   27180 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:14.615287   27180 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:14.615314   27180 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:14.615486   27180 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:14.615771   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:14.615806   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:14.632700   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0729 10:46:14.633173   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:14.633665   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:14.633701   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:14.634055   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:14.634274   27180 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:46:14.634448   27180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:14.634467   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:46:14.637667   27180 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:14.638152   27180 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:14.638182   27180 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:14.638340   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:46:14.638539   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:46:14.638752   27180 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:46:14.638920   27180 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:46:33.010966   27180 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:33.011050   27180 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:46:33.011064   27180 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:33.011087   27180 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:46:33.011103   27180 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:33.011110   27180 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:46:33.011401   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.011445   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.027558   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 10:46:33.027975   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.028480   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.028503   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.028781   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.028960   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:46:33.030691   27180 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:46:33.030718   27180 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:33.031022   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.031065   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.045522   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0729 10:46:33.045968   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.046411   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.046430   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.046749   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.046926   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:46:33.049471   27180 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:33.049873   27180 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:33.049908   27180 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:33.050105   27180 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:33.050377   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.050439   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.065676   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0729 10:46:33.066112   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.066578   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.066604   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.066951   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.067142   27180 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:46:33.067354   27180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:33.067374   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:46:33.069843   27180 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:33.070287   27180 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:33.070319   27180 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:33.070426   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:46:33.070600   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:46:33.070763   27180 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:46:33.070921   27180 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:46:33.156467   27180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:33.175256   27180 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:33.175291   27180 api_server.go:166] Checking apiserver status ...
	I0729 10:46:33.175334   27180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:33.192219   27180 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:46:33.203402   27180 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:33.203462   27180 ssh_runner.go:195] Run: ls
	I0729 10:46:33.208393   27180 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:33.214663   27180 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:33.214687   27180 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:46:33.214724   27180 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:33.214746   27180 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:46:33.215099   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.215150   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.230622   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I0729 10:46:33.231163   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.231680   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.231699   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.232015   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.232194   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:46:33.233609   27180 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:46:33.233623   27180 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:33.233913   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.233954   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.249981   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0729 10:46:33.250431   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.250986   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.251012   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.251356   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.251547   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:46:33.254871   27180 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:33.255303   27180 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:33.255333   27180 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:33.255497   27180 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:33.255814   27180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:33.255853   27180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:33.270923   27180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0729 10:46:33.271274   27180 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:33.271745   27180 main.go:141] libmachine: Using API Version  1
	I0729 10:46:33.271766   27180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:33.272223   27180 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:33.272399   27180 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:46:33.272584   27180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:33.272604   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:46:33.275090   27180 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:33.275502   27180 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:33.275523   27180 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:33.275711   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:46:33.275894   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:46:33.276064   27180 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:46:33.276219   27180 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:46:33.364097   27180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:33.381932   27180 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr" : exit status 3
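The status error traces back to SSH on 192.168.39.39:22 being unreachable ("no route to host") while the m02 host is still reported as running. A possible manual reachability check, using the IP, key path and username recorded in the status log above (the nc/ssh invocations themselves are assumptions, not part of the harness):

	# assumed manual checks; IP, key path and user are taken from the status log above
	nc -vz 192.168.39.39 22
	ssh -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa docker@192.168.39.39 true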
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-763049 -n ha-763049
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-763049 logs -n 25: (1.507315942s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m03_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m04 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp testdata/cp-test.txt                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m04_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03:/home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m03 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-763049 node stop m02 -v=7                                                    | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:38:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:38:52.077459   22547 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:38:52.077714   22547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:52.077722   22547 out.go:304] Setting ErrFile to fd 2...
	I0729 10:38:52.077726   22547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:52.077902   22547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:38:52.078455   22547 out.go:298] Setting JSON to false
	I0729 10:38:52.079272   22547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1278,"bootTime":1722248254,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:38:52.079333   22547 start.go:139] virtualization: kvm guest
	I0729 10:38:52.081563   22547 out.go:177] * [ha-763049] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:38:52.082960   22547 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:38:52.083017   22547 notify.go:220] Checking for updates...
	I0729 10:38:52.085331   22547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:38:52.086636   22547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:38:52.087857   22547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.089105   22547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:38:52.090271   22547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:38:52.091526   22547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:38:52.125699   22547 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 10:38:52.126898   22547 start.go:297] selected driver: kvm2
	I0729 10:38:52.126910   22547 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:38:52.126921   22547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:38:52.127617   22547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:38:52.127697   22547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:38:52.142364   22547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:38:52.142428   22547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:38:52.142632   22547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:38:52.142722   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:38:52.142737   22547 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 10:38:52.142744   22547 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:38:52.142814   22547 start.go:340] cluster config:
	{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:38:52.142911   22547 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:38:52.144472   22547 out.go:177] * Starting "ha-763049" primary control-plane node in "ha-763049" cluster
	I0729 10:38:52.145678   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:38:52.145706   22547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:38:52.145714   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:38:52.145777   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:38:52.145786   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:38:52.146065   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:38:52.146083   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json: {Name:mk8944791de2b6e7d06bc31c24e321168e26f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:52.146208   22547 start.go:360] acquireMachinesLock for ha-763049: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:38:52.146234   22547 start.go:364] duration metric: took 14.885µs to acquireMachinesLock for "ha-763049"
	I0729 10:38:52.146249   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:38:52.146300   22547 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 10:38:52.148831   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:38:52.148959   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:52.148994   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:52.163354   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0729 10:38:52.163778   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:52.164357   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:38:52.164374   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:52.164697   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:52.164913   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:38:52.165057   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:38:52.165195   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:38:52.165224   22547 client.go:168] LocalClient.Create starting
	I0729 10:38:52.165253   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:38:52.165282   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:38:52.165295   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:38:52.165355   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:38:52.165372   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:38:52.165390   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:38:52.165405   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:38:52.165418   22547 main.go:141] libmachine: (ha-763049) Calling .PreCreateCheck
	I0729 10:38:52.165764   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:38:52.166158   22547 main.go:141] libmachine: Creating machine...
	I0729 10:38:52.166170   22547 main.go:141] libmachine: (ha-763049) Calling .Create
	I0729 10:38:52.166298   22547 main.go:141] libmachine: (ha-763049) Creating KVM machine...
	I0729 10:38:52.167495   22547 main.go:141] libmachine: (ha-763049) DBG | found existing default KVM network
	I0729 10:38:52.168190   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.168065   22570 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0729 10:38:52.168221   22547 main.go:141] libmachine: (ha-763049) DBG | created network xml: 
	I0729 10:38:52.168240   22547 main.go:141] libmachine: (ha-763049) DBG | <network>
	I0729 10:38:52.168248   22547 main.go:141] libmachine: (ha-763049) DBG |   <name>mk-ha-763049</name>
	I0729 10:38:52.168255   22547 main.go:141] libmachine: (ha-763049) DBG |   <dns enable='no'/>
	I0729 10:38:52.168261   22547 main.go:141] libmachine: (ha-763049) DBG |   
	I0729 10:38:52.168269   22547 main.go:141] libmachine: (ha-763049) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 10:38:52.168277   22547 main.go:141] libmachine: (ha-763049) DBG |     <dhcp>
	I0729 10:38:52.168288   22547 main.go:141] libmachine: (ha-763049) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 10:38:52.168300   22547 main.go:141] libmachine: (ha-763049) DBG |     </dhcp>
	I0729 10:38:52.168329   22547 main.go:141] libmachine: (ha-763049) DBG |   </ip>
	I0729 10:38:52.168340   22547 main.go:141] libmachine: (ha-763049) DBG |   
	I0729 10:38:52.168345   22547 main.go:141] libmachine: (ha-763049) DBG | </network>
	I0729 10:38:52.168350   22547 main.go:141] libmachine: (ha-763049) DBG | 
	I0729 10:38:52.173436   22547 main.go:141] libmachine: (ha-763049) DBG | trying to create private KVM network mk-ha-763049 192.168.39.0/24...
	I0729 10:38:52.239432   22547 main.go:141] libmachine: (ha-763049) DBG | private KVM network mk-ha-763049 192.168.39.0/24 created
	I0729 10:38:52.239455   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.239376   22570 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.239466   22547 main.go:141] libmachine: (ha-763049) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 ...
	I0729 10:38:52.239507   22547 main.go:141] libmachine: (ha-763049) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:38:52.239618   22547 main.go:141] libmachine: (ha-763049) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:38:52.480346   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.480196   22570 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa...
	I0729 10:38:52.553287   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.553150   22570 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/ha-763049.rawdisk...
	I0729 10:38:52.553315   22547 main.go:141] libmachine: (ha-763049) DBG | Writing magic tar header
	I0729 10:38:52.553326   22547 main.go:141] libmachine: (ha-763049) DBG | Writing SSH key tar header
	I0729 10:38:52.553334   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.553267   22570 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 ...
	I0729 10:38:52.553467   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049
	I0729 10:38:52.553479   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 (perms=drwx------)
	I0729 10:38:52.553485   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:38:52.553493   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.553502   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:38:52.553512   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:38:52.553524   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:38:52.553536   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:38:52.553570   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:38:52.553580   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:38:52.553588   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:38:52.553597   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:38:52.553607   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home
	I0729 10:38:52.553621   22547 main.go:141] libmachine: (ha-763049) DBG | Skipping /home - not owner
	I0729 10:38:52.553633   22547 main.go:141] libmachine: (ha-763049) Creating domain...
	I0729 10:38:52.554621   22547 main.go:141] libmachine: (ha-763049) define libvirt domain using xml: 
	I0729 10:38:52.554643   22547 main.go:141] libmachine: (ha-763049) <domain type='kvm'>
	I0729 10:38:52.554653   22547 main.go:141] libmachine: (ha-763049)   <name>ha-763049</name>
	I0729 10:38:52.554661   22547 main.go:141] libmachine: (ha-763049)   <memory unit='MiB'>2200</memory>
	I0729 10:38:52.554669   22547 main.go:141] libmachine: (ha-763049)   <vcpu>2</vcpu>
	I0729 10:38:52.554694   22547 main.go:141] libmachine: (ha-763049)   <features>
	I0729 10:38:52.554720   22547 main.go:141] libmachine: (ha-763049)     <acpi/>
	I0729 10:38:52.554733   22547 main.go:141] libmachine: (ha-763049)     <apic/>
	I0729 10:38:52.554741   22547 main.go:141] libmachine: (ha-763049)     <pae/>
	I0729 10:38:52.554754   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.554764   22547 main.go:141] libmachine: (ha-763049)   </features>
	I0729 10:38:52.554778   22547 main.go:141] libmachine: (ha-763049)   <cpu mode='host-passthrough'>
	I0729 10:38:52.554789   22547 main.go:141] libmachine: (ha-763049)   
	I0729 10:38:52.554806   22547 main.go:141] libmachine: (ha-763049)   </cpu>
	I0729 10:38:52.554817   22547 main.go:141] libmachine: (ha-763049)   <os>
	I0729 10:38:52.554824   22547 main.go:141] libmachine: (ha-763049)     <type>hvm</type>
	I0729 10:38:52.554834   22547 main.go:141] libmachine: (ha-763049)     <boot dev='cdrom'/>
	I0729 10:38:52.554841   22547 main.go:141] libmachine: (ha-763049)     <boot dev='hd'/>
	I0729 10:38:52.554851   22547 main.go:141] libmachine: (ha-763049)     <bootmenu enable='no'/>
	I0729 10:38:52.554861   22547 main.go:141] libmachine: (ha-763049)   </os>
	I0729 10:38:52.554889   22547 main.go:141] libmachine: (ha-763049)   <devices>
	I0729 10:38:52.554911   22547 main.go:141] libmachine: (ha-763049)     <disk type='file' device='cdrom'>
	I0729 10:38:52.554921   22547 main.go:141] libmachine: (ha-763049)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/boot2docker.iso'/>
	I0729 10:38:52.554930   22547 main.go:141] libmachine: (ha-763049)       <target dev='hdc' bus='scsi'/>
	I0729 10:38:52.554938   22547 main.go:141] libmachine: (ha-763049)       <readonly/>
	I0729 10:38:52.554942   22547 main.go:141] libmachine: (ha-763049)     </disk>
	I0729 10:38:52.554949   22547 main.go:141] libmachine: (ha-763049)     <disk type='file' device='disk'>
	I0729 10:38:52.554954   22547 main.go:141] libmachine: (ha-763049)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:38:52.554964   22547 main.go:141] libmachine: (ha-763049)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/ha-763049.rawdisk'/>
	I0729 10:38:52.554969   22547 main.go:141] libmachine: (ha-763049)       <target dev='hda' bus='virtio'/>
	I0729 10:38:52.554973   22547 main.go:141] libmachine: (ha-763049)     </disk>
	I0729 10:38:52.554981   22547 main.go:141] libmachine: (ha-763049)     <interface type='network'>
	I0729 10:38:52.554987   22547 main.go:141] libmachine: (ha-763049)       <source network='mk-ha-763049'/>
	I0729 10:38:52.554993   22547 main.go:141] libmachine: (ha-763049)       <model type='virtio'/>
	I0729 10:38:52.554999   22547 main.go:141] libmachine: (ha-763049)     </interface>
	I0729 10:38:52.555005   22547 main.go:141] libmachine: (ha-763049)     <interface type='network'>
	I0729 10:38:52.555017   22547 main.go:141] libmachine: (ha-763049)       <source network='default'/>
	I0729 10:38:52.555029   22547 main.go:141] libmachine: (ha-763049)       <model type='virtio'/>
	I0729 10:38:52.555037   22547 main.go:141] libmachine: (ha-763049)     </interface>
	I0729 10:38:52.555047   22547 main.go:141] libmachine: (ha-763049)     <serial type='pty'>
	I0729 10:38:52.555058   22547 main.go:141] libmachine: (ha-763049)       <target port='0'/>
	I0729 10:38:52.555068   22547 main.go:141] libmachine: (ha-763049)     </serial>
	I0729 10:38:52.555082   22547 main.go:141] libmachine: (ha-763049)     <console type='pty'>
	I0729 10:38:52.555093   22547 main.go:141] libmachine: (ha-763049)       <target type='serial' port='0'/>
	I0729 10:38:52.555124   22547 main.go:141] libmachine: (ha-763049)     </console>
	I0729 10:38:52.555141   22547 main.go:141] libmachine: (ha-763049)     <rng model='virtio'>
	I0729 10:38:52.555155   22547 main.go:141] libmachine: (ha-763049)       <backend model='random'>/dev/random</backend>
	I0729 10:38:52.555165   22547 main.go:141] libmachine: (ha-763049)     </rng>
	I0729 10:38:52.555170   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.555183   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.555196   22547 main.go:141] libmachine: (ha-763049)   </devices>
	I0729 10:38:52.555202   22547 main.go:141] libmachine: (ha-763049) </domain>
	I0729 10:38:52.555215   22547 main.go:141] libmachine: (ha-763049) 
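
The XML dump above is the libvirt domain definition the kvm2 driver submits before booting the VM. As a rough sketch only (not minikube's actual code), defining and starting a domain from XML like this with the libvirt Go bindings could look like the following; the libvirt.org/go/libvirt import path and the domain.xml file name are assumptions for illustration, while the qemu:///system URI matches the KVMQemuURI in the config above.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon (matches KVMQemuURI:qemu:///system above).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml is assumed to hold XML like the dump above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// "define libvirt domain using xml" followed by "Creating domain...":
	// persist the definition, then start the guest.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
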
	I0729 10:38:52.559449   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:ee:0c:a6 in network default
	I0729 10:38:52.560041   22547 main.go:141] libmachine: (ha-763049) Ensuring networks are active...
	I0729 10:38:52.560064   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:52.560625   22547 main.go:141] libmachine: (ha-763049) Ensuring network default is active
	I0729 10:38:52.560910   22547 main.go:141] libmachine: (ha-763049) Ensuring network mk-ha-763049 is active
	I0729 10:38:52.561453   22547 main.go:141] libmachine: (ha-763049) Getting domain xml...
	I0729 10:38:52.562179   22547 main.go:141] libmachine: (ha-763049) Creating domain...
	I0729 10:38:53.735908   22547 main.go:141] libmachine: (ha-763049) Waiting to get IP...
	I0729 10:38:53.736598   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:53.736950   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:53.736989   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:53.736936   22570 retry.go:31] will retry after 260.647868ms: waiting for machine to come up
	I0729 10:38:53.999384   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:53.999821   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:53.999848   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:53.999778   22570 retry.go:31] will retry after 243.571937ms: waiting for machine to come up
	I0729 10:38:54.245332   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:54.245771   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:54.245803   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:54.245719   22570 retry.go:31] will retry after 477.405182ms: waiting for machine to come up
	I0729 10:38:54.724279   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:54.724733   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:54.724761   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:54.724686   22570 retry.go:31] will retry after 464.831075ms: waiting for machine to come up
	I0729 10:38:55.191623   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:55.192040   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:55.192066   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:55.191991   22570 retry.go:31] will retry after 536.612949ms: waiting for machine to come up
	I0729 10:38:55.729749   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:55.730165   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:55.730193   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:55.730119   22570 retry.go:31] will retry after 906.452891ms: waiting for machine to come up
	I0729 10:38:56.638140   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:56.638490   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:56.638535   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:56.638457   22570 retry.go:31] will retry after 973.555192ms: waiting for machine to come up
	I0729 10:38:57.613156   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:57.613603   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:57.613629   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:57.613567   22570 retry.go:31] will retry after 1.052023326s: waiting for machine to come up
	I0729 10:38:58.666683   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:58.667140   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:58.667161   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:58.667090   22570 retry.go:31] will retry after 1.254632627s: waiting for machine to come up
	I0729 10:38:59.923484   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:59.923837   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:59.923874   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:59.923819   22570 retry.go:31] will retry after 1.530478535s: waiting for machine to come up
	I0729 10:39:01.455809   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:01.456172   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:01.456199   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:01.456125   22570 retry.go:31] will retry after 2.507484818s: waiting for machine to come up
	I0729 10:39:03.966003   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:03.966593   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:03.966619   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:03.966559   22570 retry.go:31] will retry after 2.741723138s: waiting for machine to come up
	I0729 10:39:06.711555   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:06.712166   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:06.712197   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:06.712118   22570 retry.go:31] will retry after 3.481820681s: waiting for machine to come up
	I0729 10:39:10.195728   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:10.196102   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:10.196129   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:10.196040   22570 retry.go:31] will retry after 5.393944744s: waiting for machine to come up
	I0729 10:39:15.593535   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.593908   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has current primary IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.593926   22547 main.go:141] libmachine: (ha-763049) Found IP for machine: 192.168.39.68
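
The retry.go lines above show the driver polling the DHCP leases for the guest's MAC address with an increasing, jittered delay until an address appears (about 23 seconds in this run). A minimal sketch of that polling pattern, with lookupLeaseIP as a hypothetical stand-in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying libvirt's DHCP leases for the MAC;
// it fails until the guest has booted and obtained an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered backoff, mirroring the
// "will retry after ..." messages in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		if delay < 10*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:6d:89:08", 5*time.Second)
	fmt.Println(ip, err)
}
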
	I0729 10:39:15.593939   22547 main.go:141] libmachine: (ha-763049) Reserving static IP address...
	I0729 10:39:15.594243   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find host DHCP lease matching {name: "ha-763049", mac: "52:54:00:6d:89:08", ip: "192.168.39.68"} in network mk-ha-763049
	I0729 10:39:15.667923   22547 main.go:141] libmachine: (ha-763049) DBG | Getting to WaitForSSH function...
	I0729 10:39:15.667953   22547 main.go:141] libmachine: (ha-763049) Reserved static IP address: 192.168.39.68
	I0729 10:39:15.667966   22547 main.go:141] libmachine: (ha-763049) Waiting for SSH to be available...
	I0729 10:39:15.670365   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.670825   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.670866   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.670929   22547 main.go:141] libmachine: (ha-763049) DBG | Using SSH client type: external
	I0729 10:39:15.670947   22547 main.go:141] libmachine: (ha-763049) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa (-rw-------)
	I0729 10:39:15.671125   22547 main.go:141] libmachine: (ha-763049) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:39:15.671150   22547 main.go:141] libmachine: (ha-763049) DBG | About to run SSH command:
	I0729 10:39:15.671164   22547 main.go:141] libmachine: (ha-763049) DBG | exit 0
	I0729 10:39:15.794907   22547 main.go:141] libmachine: (ha-763049) DBG | SSH cmd err, output: <nil>: 
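
With "SSH client type: external" the driver shells out to the system ssh binary using the options printed above and runs exit 0 as a readiness probe. An illustrative reconstruction of that single invocation with os/exec (host, port, key path and options are copied from the log line; this is not the driver's exact code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.68",
		"exit 0", // the readiness probe seen in the log
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	log.Printf("ssh output: %q err: %v", out, err)
}
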
	I0729 10:39:15.795154   22547 main.go:141] libmachine: (ha-763049) KVM machine creation complete!
	I0729 10:39:15.795476   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:39:15.796030   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:15.796287   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:15.796507   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:39:15.796521   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:15.797891   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:39:15.797909   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:39:15.797916   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:39:15.797924   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:15.800864   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.801186   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.801220   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.801409   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:15.801619   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.801777   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.801928   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:15.802109   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:15.802328   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:15.802340   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:39:15.906351   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:39:15.906381   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:39:15.906391   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:15.909047   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.909393   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.909418   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.909590   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:15.909788   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.909938   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.910068   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:15.910223   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:15.910438   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:15.910450   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:39:16.015990   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:39:16.016055   22547 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:39:16.016062   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:39:16.016069   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.016332   22547 buildroot.go:166] provisioning hostname "ha-763049"
	I0729 10:39:16.016364   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.016513   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.018985   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.019250   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.019289   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.019356   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.019528   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.019701   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.019878   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.020028   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.020187   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.020197   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049 && echo "ha-763049" | sudo tee /etc/hostname
	I0729 10:39:16.137232   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:39:16.137259   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.139762   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.140063   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.140091   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.140247   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.140469   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.140641   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.140844   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.141007   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.141187   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.141203   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:39:16.252141   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:39:16.252178   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:39:16.252220   22547 buildroot.go:174] setting up certificates
	I0729 10:39:16.252233   22547 provision.go:84] configureAuth start
	I0729 10:39:16.252248   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.252538   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:16.255138   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.255477   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.255498   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.255725   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.257976   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.258368   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.258394   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.258594   22547 provision.go:143] copyHostCerts
	I0729 10:39:16.258627   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:39:16.258672   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:39:16.258681   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:39:16.258783   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:39:16.258902   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:39:16.258922   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:39:16.258928   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:39:16.258957   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:39:16.258995   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:39:16.259011   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:39:16.259017   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:39:16.259039   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:39:16.259089   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049 san=[127.0.0.1 192.168.39.68 ha-763049 localhost minikube]
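
provision.go then mints a server certificate whose subject alternative names are exactly the san list above (127.0.0.1, the guest IP 192.168.39.68, the hostname, localhost, minikube), signed by the minikube CA. A compressed sketch of that step with crypto/x509; a throwaway CA is generated here in place of loading ca.pem and ca-key.pem, so everything beyond the SAN list and org name from the log is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads the existing minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs and org from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-763049"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-763049", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
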
	I0729 10:39:16.327424   22547 provision.go:177] copyRemoteCerts
	I0729 10:39:16.327477   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:39:16.327500   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.330353   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.330638   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.330674   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.330826   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.331034   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.331193   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.331319   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:16.413579   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:39:16.413642   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:39:16.438013   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:39:16.438077   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 10:39:16.462631   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:39:16.462694   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:39:16.486583   22547 provision.go:87] duration metric: took 234.338734ms to configureAuth
	I0729 10:39:16.486610   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:39:16.486819   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:16.486904   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.489620   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.489972   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.490016   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.490225   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.490416   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.490562   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.490677   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.490902   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.491081   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.491099   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:39:16.769524   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:39:16.769547   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:39:16.769554   22547 main.go:141] libmachine: (ha-763049) Calling .GetURL
	I0729 10:39:16.770791   22547 main.go:141] libmachine: (ha-763049) DBG | Using libvirt version 6000000
	I0729 10:39:16.774633   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.775161   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.775181   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.775359   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:39:16.775368   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:39:16.775374   22547 client.go:171] duration metric: took 24.610141226s to LocalClient.Create
	I0729 10:39:16.775399   22547 start.go:167] duration metric: took 24.610203669s to libmachine.API.Create "ha-763049"
	I0729 10:39:16.775411   22547 start.go:293] postStartSetup for "ha-763049" (driver="kvm2")
	I0729 10:39:16.775423   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:39:16.775461   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:16.775699   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:39:16.775723   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.778044   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.778401   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.778427   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.778534   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.778727   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.778901   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.779070   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:16.861787   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:39:16.866195   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:39:16.866219   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:39:16.866291   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:39:16.866380   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:39:16.866391   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:39:16.866495   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:39:16.876269   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:39:16.900206   22547 start.go:296] duration metric: took 124.78063ms for postStartSetup
	I0729 10:39:16.900263   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:39:16.900853   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:16.903365   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.903650   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.903680   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.903860   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:16.904020   22547 start.go:128] duration metric: took 24.757712223s to createHost
	I0729 10:39:16.904041   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.906106   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.906426   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.906447   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.906580   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.906739   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.906932   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.907049   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.907240   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.907452   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.907470   22547 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:39:17.011414   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249556.990672847
	
	I0729 10:39:17.011439   22547 fix.go:216] guest clock: 1722249556.990672847
	I0729 10:39:17.011448   22547 fix.go:229] Guest: 2024-07-29 10:39:16.990672847 +0000 UTC Remote: 2024-07-29 10:39:16.904031905 +0000 UTC m=+24.860037397 (delta=86.640942ms)
	I0729 10:39:17.011474   22547 fix.go:200] guest clock delta is within tolerance: 86.640942ms
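
The guest clock check above reads the VM's time over SSH (the date command a few lines earlier), compares it with the host-side timestamp, and skips resynchronization because the 86.6ms delta is within tolerance. The comparison itself is a plain duration check, sketched here with the values from this run (the one-second threshold is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time parsed from "1722249556.990672847"; host time from the Remote timestamp above.
	guest := time.Unix(1722249556, 990672847)
	host := time.Date(2024, 7, 29, 10, 39, 16, 904031905, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold, illustration only
	fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, delta <= tolerance)
}
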
	I0729 10:39:17.011479   22547 start.go:83] releasing machines lock for "ha-763049", held for 24.8652374s
	I0729 10:39:17.011496   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.011779   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:17.014065   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.014378   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.014412   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.014510   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.014941   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.015161   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.015268   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:39:17.015304   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:17.015411   22547 ssh_runner.go:195] Run: cat /version.json
	I0729 10:39:17.015441   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:17.017842   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018049   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018194   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.018227   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018352   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:17.018429   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.018452   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018519   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:17.018632   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:17.018719   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:17.018777   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:17.018847   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:17.018917   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:17.019017   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:17.114171   22547 ssh_runner.go:195] Run: systemctl --version
	I0729 10:39:17.120439   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:39:17.281213   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:39:17.287189   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:39:17.287259   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:39:17.303804   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:39:17.303828   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:39:17.303888   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:39:17.320281   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:39:17.334675   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:39:17.334752   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:39:17.349587   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:39:17.366548   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:39:17.492781   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:39:17.656837   22547 docker.go:233] disabling docker service ...
	I0729 10:39:17.656936   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:39:17.671794   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:39:17.685030   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:39:17.815598   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:39:17.942350   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:39:17.956570   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:39:17.975328   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:39:17.975394   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:17.985796   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:39:17.985891   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:17.996359   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.006652   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.016976   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:39:18.027669   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.038037   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.055454   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.065608   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:39:18.075028   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:39:18.075090   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:39:18.089097   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:39:18.098583   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:18.223266   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
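The sed edits above all rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: the pause image is pinned, the cgroup manager is forced to cgroupfs with conmon in the "pod" cgroup, and default_sysctls opens low ports to unprivileged binds inside containers. A minimal sketch of the fields those edits leave in the drop-in, assuming the stock sectioning of the file (the real file carries other defaults as well):

# /etc/crio/crio.conf.d/02-crio.conf -- fields touched by the commands above (sketch)
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]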
	I0729 10:39:18.383865   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:39:18.383944   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:39:18.389094   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:39:18.389150   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:39:18.393115   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:39:18.432138   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:39:18.432214   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:39:18.460406   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:39:18.490525   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:39:18.491777   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:18.494271   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:18.494574   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:18.494593   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:18.494801   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:39:18.498974   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
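The /etc/hosts update above is a filter-and-append: any existing host.minikube.internal line is removed, the fresh 192.168.39.1 mapping is appended, and the result is copied back over /etc/hosts, so re-running the step stays idempotent. The same pattern in isolation (a sketch; the hostname and address are placeholders, not values from this run):

# Replace-or-add a single /etc/hosts entry idempotently (sketch)
NAME=host.example.internal
ADDR=192.0.2.10
{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${ADDR}" "${NAME}"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts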
	I0729 10:39:18.512326   22547 kubeadm.go:883] updating cluster {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:39:18.512428   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:39:18.512477   22547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:39:18.550499   22547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 10:39:18.550569   22547 ssh_runner.go:195] Run: which lz4
	I0729 10:39:18.554554   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 10:39:18.554636   22547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:39:18.558721   22547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:39:18.558750   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 10:39:20.028271   22547 crio.go:462] duration metric: took 1.473651879s to copy over tarball
	I0729 10:39:20.028361   22547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:39:22.189762   22547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161366488s)
	I0729 10:39:22.189805   22547 crio.go:469] duration metric: took 2.161483142s to extract the tarball
	I0729 10:39:22.189816   22547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:39:22.228114   22547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:39:22.272938   22547 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:39:22.272962   22547 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:39:22.272972   22547 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 10:39:22.273094   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
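In the kubelet drop-in above, the bare ExecStart= line is intentional: inside a systemd override, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the following ExecStart= then installs the minikube-specific invocation, so only one start command remains in effect. The general shape of such an override (a sketch; the path and flag are placeholders):

# /etc/systemd/system/kubelet.service.d/override.conf -- ExecStart reset pattern (sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/some-daemon --flag=value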
	I0729 10:39:22.273173   22547 ssh_runner.go:195] Run: crio config
	I0729 10:39:22.316250   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:39:22.316273   22547 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:39:22.316283   22547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:39:22.316308   22547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-763049 NodeName:ha-763049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:39:22.316471   22547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-763049"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:39:22.316499   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:39:22.316550   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:39:22.333161   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:39:22.333284   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
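kube-vip runs as a static pod on every control-plane node: with vip_leaderelection enabled the replicas compete for the plndr-cp-lock lease, the current leader answers ARP for 192.168.39.254 (vip_arp), and lb_enable additionally load-balances API traffic on port 8443 across the control planes, which is the APIServerHAVIP the rest of this profile points at. Two quick checks once the manifest lands under /etc/kubernetes/manifests (a sketch; run on a control-plane node):

# Does this node currently hold the virtual IP? (sketch)
ip -4 addr show dev eth0 | grep 192.168.39.254
# Is the API server reachable through the VIP? /healthz is anonymously readable on kubeadm clusters
curl -sk https://192.168.39.254:8443/healthz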
	I0729 10:39:22.333350   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:39:22.343604   22547 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:39:22.343666   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 10:39:22.353543   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 10:39:22.371552   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:39:22.388445   22547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 10:39:22.405844   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 10:39:22.422807   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:39:22.426602   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:39:22.439119   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:22.572985   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:39:22.589815   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.68
	I0729 10:39:22.589841   22547 certs.go:194] generating shared ca certs ...
	I0729 10:39:22.589872   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.590034   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:39:22.590091   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:39:22.590106   22547 certs.go:256] generating profile certs ...
	I0729 10:39:22.590167   22547 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:39:22.590184   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt with IP's: []
	I0729 10:39:22.798588   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt ...
	I0729 10:39:22.798617   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt: {Name:mk8726fe8d9d70191efa461a421de8e0ef61240d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.798814   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key ...
	I0729 10:39:22.798832   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key: {Name:mk794f5476902a4cf64a0422faec2c5b4ffae7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.798936   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae
	I0729 10:39:22.798958   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.254]
	I0729 10:39:22.933457   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae ...
	I0729 10:39:22.933483   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae: {Name:mk6d7aa030326f6063141278dafe1a87a05ebef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.933649   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae ...
	I0729 10:39:22.933668   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae: {Name:mkfa65284d28fb8ca272ca0f1ccf2a74e2be20ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.933756   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:39:22.933863   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:39:22.933936   22547 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:39:22.933957   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt with IP's: []
	I0729 10:39:23.030337   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt ...
	I0729 10:39:23.030362   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt: {Name:mk78c05815a3562526ae4c6c617ba0906af3cc32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:23.030525   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key ...
	I0729 10:39:23.030540   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key: {Name:mkec8714f7255ef23612611d8205d8b099bcce62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:23.030637   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:39:23.030661   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:39:23.030679   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:39:23.030715   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:39:23.030734   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:39:23.030752   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:39:23.030771   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:39:23.030788   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:39:23.030855   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:39:23.030905   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:39:23.030924   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:39:23.030957   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:39:23.030986   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:39:23.031014   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:39:23.031075   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:39:23.031108   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.031126   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.031144   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.031689   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:39:23.071341   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:39:23.100071   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:39:23.129684   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:39:23.156536   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 10:39:23.180536   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:39:23.204104   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:39:23.227728   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:39:23.252831   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:39:23.276665   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:39:23.301230   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:39:23.325711   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:39:23.346356   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:39:23.354160   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:39:23.369101   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.380780   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.380851   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.389325   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:39:23.404598   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:39:23.417480   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.427076   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.427142   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.434240   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:39:23.445181   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:39:23.456244   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.461063   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.461109   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.466971   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
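The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: `openssl x509 -hash` prints the eight-hex-digit value the system trust store expects, and the .0 suffix is a collision index, which is how TLS clients scanning /etc/ssl/certs locate the minikube CA and the extra certificates. Reproducing the mapping by hand (a sketch):

# The trust-store symlink is named after the certificate's subject hash (sketch)
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, e.g. b5213941
ls -l /etc/ssl/certs/b5213941.0                                           # symlink resolving to minikubeCA.pem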
	I0729 10:39:23.477718   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:39:23.481837   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:39:23.481893   22547 kubeadm.go:392] StartCluster: {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:39:23.481969   22547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:39:23.482037   22547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:39:23.520583   22547 cri.go:89] found id: ""
	I0729 10:39:23.520655   22547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:39:23.530886   22547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:39:23.541172   22547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:39:23.550920   22547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:39:23.550938   22547 kubeadm.go:157] found existing configuration files:
	
	I0729 10:39:23.550992   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:39:23.560298   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:39:23.560355   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:39:23.569852   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:39:23.579097   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:39:23.579181   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:39:23.588698   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.597909   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:39:23.597972   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.607922   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:39:23.617362   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:39:23.617417   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:39:23.627121   22547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:39:23.733375   22547 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:39:23.733497   22547 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:39:23.875084   22547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:39:23.875227   22547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:39:23.875391   22547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:39:24.085855   22547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:39:24.378338   22547 out.go:204]   - Generating certificates and keys ...
	I0729 10:39:24.378467   22547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:39:24.378574   22547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:39:24.378748   22547 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:39:24.405756   22547 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:39:24.456478   22547 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:39:24.596679   22547 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:39:24.702522   22547 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:39:24.702675   22547 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-763049 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0729 10:39:24.764306   22547 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:39:24.764447   22547 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-763049 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0729 10:39:24.971011   22547 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:39:25.207262   22547 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:39:25.362609   22547 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:39:25.362695   22547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:39:25.506790   22547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:39:25.708122   22547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:39:26.061258   22547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:39:26.117983   22547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:39:26.454725   22547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:39:26.456829   22547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:39:26.459297   22547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:39:26.461316   22547 out.go:204]   - Booting up control plane ...
	I0729 10:39:26.461429   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:39:26.461529   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:39:26.461638   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:39:26.475981   22547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:39:26.476848   22547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:39:26.476889   22547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:39:26.606132   22547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:39:26.606225   22547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:39:27.119209   22547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.740178ms
	I0729 10:39:27.119279   22547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:39:33.233694   22547 kubeadm.go:310] [api-check] The API server is healthy after 6.118585367s
	I0729 10:39:33.253736   22547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:39:33.270849   22547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:39:33.800546   22547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:39:33.800738   22547 kubeadm.go:310] [mark-control-plane] Marking the node ha-763049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:39:33.815384   22547 kubeadm.go:310] [bootstrap-token] Using token: 6vmhhd.ltmhhdran4o8516u
	I0729 10:39:33.816915   22547 out.go:204]   - Configuring RBAC rules ...
	I0729 10:39:33.817040   22547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:39:33.824083   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:39:33.838609   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:39:33.842317   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:39:33.846144   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:39:33.849951   22547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:39:33.867391   22547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:39:34.111922   22547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:39:34.641225   22547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:39:34.642031   22547 kubeadm.go:310] 
	I0729 10:39:34.642097   22547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:39:34.642118   22547 kubeadm.go:310] 
	I0729 10:39:34.642197   22547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:39:34.642204   22547 kubeadm.go:310] 
	I0729 10:39:34.642289   22547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:39:34.642363   22547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:39:34.642414   22547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:39:34.642421   22547 kubeadm.go:310] 
	I0729 10:39:34.642466   22547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:39:34.642481   22547 kubeadm.go:310] 
	I0729 10:39:34.642522   22547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:39:34.642528   22547 kubeadm.go:310] 
	I0729 10:39:34.642569   22547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:39:34.642643   22547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:39:34.642719   22547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:39:34.642740   22547 kubeadm.go:310] 
	I0729 10:39:34.642960   22547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:39:34.643039   22547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:39:34.643047   22547 kubeadm.go:310] 
	I0729 10:39:34.643122   22547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6vmhhd.ltmhhdran4o8516u \
	I0729 10:39:34.643208   22547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 10:39:34.643227   22547 kubeadm.go:310] 	--control-plane 
	I0729 10:39:34.643242   22547 kubeadm.go:310] 
	I0729 10:39:34.643372   22547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:39:34.643384   22547 kubeadm.go:310] 
	I0729 10:39:34.643503   22547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6vmhhd.ltmhhdran4o8516u \
	I0729 10:39:34.643663   22547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 10:39:34.644251   22547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:39:34.644323   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:39:34.644340   22547 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:39:34.646388   22547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 10:39:34.647902   22547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 10:39:34.658214   22547 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 10:39:34.658237   22547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 10:39:34.679004   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 10:39:35.041167   22547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:39:35.041240   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:35.041267   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049 minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=true
	I0729 10:39:35.063579   22547 ops.go:34] apiserver oom_adj: -16
	I0729 10:39:35.190668   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:35.690992   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:36.190905   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:36.691217   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:37.190922   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:37.691236   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:38.191565   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:38.691530   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:39.190984   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:39.691059   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:40.190772   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:40.691719   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:41.190847   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:41.691012   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:42.191207   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:42.691224   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:43.191590   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:43.691339   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:44.190939   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:44.691132   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:45.191225   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:45.691501   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:46.190882   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:46.691697   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:47.191478   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:47.690991   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:48.191466   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:48.325149   22547 kubeadm.go:1113] duration metric: took 13.28397001s to wait for elevateKubeSystemPrivileges
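The long run of identical `kubectl get sa default` calls above is a poll: right after binding cluster-admin to kube-system:default, minikube keeps asking for the default ServiceAccount roughly every half second and only proceeds once it exists, which is the 13.28s the elevateKubeSystemPrivileges metric records. The same wait written as a shell loop (a sketch, not minikube's actual implementation):

# Poll until the default ServiceAccount exists (sketch of the wait logged above)
until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done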
	I0729 10:39:48.325188   22547 kubeadm.go:394] duration metric: took 24.843296888s to StartCluster
	I0729 10:39:48.325210   22547 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:48.325340   22547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:39:48.326287   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:48.326542   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:39:48.326552   22547 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:39:48.326578   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:39:48.326586   22547 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:39:48.326645   22547 addons.go:69] Setting storage-provisioner=true in profile "ha-763049"
	I0729 10:39:48.326675   22547 addons.go:234] Setting addon storage-provisioner=true in "ha-763049"
	I0729 10:39:48.326687   22547 addons.go:69] Setting default-storageclass=true in profile "ha-763049"
	I0729 10:39:48.326717   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:39:48.326752   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:48.326793   22547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-763049"
	I0729 10:39:48.327142   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.327154   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.327175   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.327176   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.341990   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0729 10:39:48.342068   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0729 10:39:48.342429   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.342537   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.343025   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.343044   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.343170   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.343194   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.343400   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.343517   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.343560   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.344094   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.344134   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.345884   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:39:48.346221   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:39:48.346725   22547 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 10:39:48.346995   22547 addons.go:234] Setting addon default-storageclass=true in "ha-763049"
	I0729 10:39:48.347038   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:39:48.347404   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.347436   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.359956   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0729 10:39:48.360526   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.361028   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.361047   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.361429   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.361622   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.362323   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0729 10:39:48.362722   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.363239   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.363262   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.363282   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:48.363605   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.364119   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.364165   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.365217   22547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:39:48.367384   22547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:48.367405   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:39:48.367435   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:48.371124   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.371537   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:48.371575   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.371728   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:48.371917   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:48.372038   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:48.372216   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:48.380718   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0729 10:39:48.381143   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.381629   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.381643   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.381934   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.382148   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.383621   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:48.383838   22547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:48.383854   22547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:39:48.383872   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:48.386516   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.386912   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:48.386938   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.387072   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:48.387239   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:48.387389   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:48.387517   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:48.465561   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:39:48.532068   22547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:48.580774   22547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:48.828047   22547 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 10:39:49.029755   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.029789   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.029797   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.029815   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030066   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030072   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030079   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030086   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030097   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.030106   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030088   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.030200   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030303   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030315   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030443   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030462   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030579   22547 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 10:39:49.030591   22547 round_trippers.go:469] Request Headers:
	I0729 10:39:49.030603   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:39:49.030614   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:39:49.043164   22547 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0729 10:39:49.043970   22547 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 10:39:49.043991   22547 round_trippers.go:469] Request Headers:
	I0729 10:39:49.044001   22547 round_trippers.go:473]     Content-Type: application/json
	I0729 10:39:49.044006   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:39:49.044012   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:39:49.047889   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:39:49.048057   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.048079   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.048324   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.048348   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.048350   22547 main.go:141] libmachine: (ha-763049) DBG | Closing plugin on server side
	I0729 10:39:49.050301   22547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 10:39:49.051545   22547 addons.go:510] duration metric: took 724.956749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 10:39:49.051574   22547 start.go:246] waiting for cluster config update ...
	I0729 10:39:49.051584   22547 start.go:255] writing updated cluster config ...
	I0729 10:39:49.053174   22547 out.go:177] 
	I0729 10:39:49.054601   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:49.054686   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:49.056248   22547 out.go:177] * Starting "ha-763049-m02" control-plane node in "ha-763049" cluster
	I0729 10:39:49.057622   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:39:49.057648   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:39:49.057758   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:39:49.057772   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:39:49.057863   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:49.058064   22547 start.go:360] acquireMachinesLock for ha-763049-m02: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:39:49.058126   22547 start.go:364] duration metric: took 32.207µs to acquireMachinesLock for "ha-763049-m02"
	I0729 10:39:49.058145   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:39:49.058213   22547 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 10:39:49.059776   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:39:49.059853   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:49.059878   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:49.074210   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0729 10:39:49.074648   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:49.075131   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:49.075154   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:49.075459   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:49.075616   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:39:49.075762   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:39:49.075927   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:39:49.075950   22547 client.go:168] LocalClient.Create starting
	I0729 10:39:49.075982   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:39:49.076019   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:49.076032   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:49.076079   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:39:49.076104   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:49.076114   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:49.076135   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:39:49.076143   22547 main.go:141] libmachine: (ha-763049-m02) Calling .PreCreateCheck
	I0729 10:39:49.076282   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:39:49.076659   22547 main.go:141] libmachine: Creating machine...
	I0729 10:39:49.076673   22547 main.go:141] libmachine: (ha-763049-m02) Calling .Create
	I0729 10:39:49.076786   22547 main.go:141] libmachine: (ha-763049-m02) Creating KVM machine...
	I0729 10:39:49.077925   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found existing default KVM network
	I0729 10:39:49.078122   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found existing private KVM network mk-ha-763049
	I0729 10:39:49.078239   22547 main.go:141] libmachine: (ha-763049-m02) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 ...
	I0729 10:39:49.078268   22547 main.go:141] libmachine: (ha-763049-m02) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:39:49.078331   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.078232   22945 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:39:49.078439   22547 main.go:141] libmachine: (ha-763049-m02) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:39:49.305736   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.305617   22945 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa...
	I0729 10:39:49.543100   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.542923   22945 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/ha-763049-m02.rawdisk...
	I0729 10:39:49.543134   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Writing magic tar header
	I0729 10:39:49.543150   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Writing SSH key tar header
	I0729 10:39:49.543164   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.543032   22945 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 ...
	I0729 10:39:49.543180   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02
	I0729 10:39:49.543218   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 (perms=drwx------)
	I0729 10:39:49.543244   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:39:49.543256   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:39:49.543284   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:39:49.543314   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:39:49.543335   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:39:49.543350   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:39:49.543362   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:39:49.543370   22547 main.go:141] libmachine: (ha-763049-m02) Creating domain...
	I0729 10:39:49.543380   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:39:49.543391   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:39:49.543404   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:39:49.543415   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home
	I0729 10:39:49.543432   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Skipping /home - not owner
	I0729 10:39:49.544268   22547 main.go:141] libmachine: (ha-763049-m02) define libvirt domain using xml: 
	I0729 10:39:49.544287   22547 main.go:141] libmachine: (ha-763049-m02) <domain type='kvm'>
	I0729 10:39:49.544322   22547 main.go:141] libmachine: (ha-763049-m02)   <name>ha-763049-m02</name>
	I0729 10:39:49.544347   22547 main.go:141] libmachine: (ha-763049-m02)   <memory unit='MiB'>2200</memory>
	I0729 10:39:49.544360   22547 main.go:141] libmachine: (ha-763049-m02)   <vcpu>2</vcpu>
	I0729 10:39:49.544366   22547 main.go:141] libmachine: (ha-763049-m02)   <features>
	I0729 10:39:49.544375   22547 main.go:141] libmachine: (ha-763049-m02)     <acpi/>
	I0729 10:39:49.544385   22547 main.go:141] libmachine: (ha-763049-m02)     <apic/>
	I0729 10:39:49.544394   22547 main.go:141] libmachine: (ha-763049-m02)     <pae/>
	I0729 10:39:49.544404   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.544411   22547 main.go:141] libmachine: (ha-763049-m02)   </features>
	I0729 10:39:49.544422   22547 main.go:141] libmachine: (ha-763049-m02)   <cpu mode='host-passthrough'>
	I0729 10:39:49.544432   22547 main.go:141] libmachine: (ha-763049-m02)   
	I0729 10:39:49.544444   22547 main.go:141] libmachine: (ha-763049-m02)   </cpu>
	I0729 10:39:49.544455   22547 main.go:141] libmachine: (ha-763049-m02)   <os>
	I0729 10:39:49.544463   22547 main.go:141] libmachine: (ha-763049-m02)     <type>hvm</type>
	I0729 10:39:49.544474   22547 main.go:141] libmachine: (ha-763049-m02)     <boot dev='cdrom'/>
	I0729 10:39:49.544483   22547 main.go:141] libmachine: (ha-763049-m02)     <boot dev='hd'/>
	I0729 10:39:49.544492   22547 main.go:141] libmachine: (ha-763049-m02)     <bootmenu enable='no'/>
	I0729 10:39:49.544501   22547 main.go:141] libmachine: (ha-763049-m02)   </os>
	I0729 10:39:49.544510   22547 main.go:141] libmachine: (ha-763049-m02)   <devices>
	I0729 10:39:49.544520   22547 main.go:141] libmachine: (ha-763049-m02)     <disk type='file' device='cdrom'>
	I0729 10:39:49.544540   22547 main.go:141] libmachine: (ha-763049-m02)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/boot2docker.iso'/>
	I0729 10:39:49.544560   22547 main.go:141] libmachine: (ha-763049-m02)       <target dev='hdc' bus='scsi'/>
	I0729 10:39:49.544569   22547 main.go:141] libmachine: (ha-763049-m02)       <readonly/>
	I0729 10:39:49.544576   22547 main.go:141] libmachine: (ha-763049-m02)     </disk>
	I0729 10:39:49.544586   22547 main.go:141] libmachine: (ha-763049-m02)     <disk type='file' device='disk'>
	I0729 10:39:49.544598   22547 main.go:141] libmachine: (ha-763049-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:39:49.544615   22547 main.go:141] libmachine: (ha-763049-m02)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/ha-763049-m02.rawdisk'/>
	I0729 10:39:49.544631   22547 main.go:141] libmachine: (ha-763049-m02)       <target dev='hda' bus='virtio'/>
	I0729 10:39:49.544642   22547 main.go:141] libmachine: (ha-763049-m02)     </disk>
	I0729 10:39:49.544661   22547 main.go:141] libmachine: (ha-763049-m02)     <interface type='network'>
	I0729 10:39:49.544692   22547 main.go:141] libmachine: (ha-763049-m02)       <source network='mk-ha-763049'/>
	I0729 10:39:49.544704   22547 main.go:141] libmachine: (ha-763049-m02)       <model type='virtio'/>
	I0729 10:39:49.544723   22547 main.go:141] libmachine: (ha-763049-m02)     </interface>
	I0729 10:39:49.544744   22547 main.go:141] libmachine: (ha-763049-m02)     <interface type='network'>
	I0729 10:39:49.544756   22547 main.go:141] libmachine: (ha-763049-m02)       <source network='default'/>
	I0729 10:39:49.544767   22547 main.go:141] libmachine: (ha-763049-m02)       <model type='virtio'/>
	I0729 10:39:49.544785   22547 main.go:141] libmachine: (ha-763049-m02)     </interface>
	I0729 10:39:49.544810   22547 main.go:141] libmachine: (ha-763049-m02)     <serial type='pty'>
	I0729 10:39:49.544823   22547 main.go:141] libmachine: (ha-763049-m02)       <target port='0'/>
	I0729 10:39:49.544830   22547 main.go:141] libmachine: (ha-763049-m02)     </serial>
	I0729 10:39:49.544841   22547 main.go:141] libmachine: (ha-763049-m02)     <console type='pty'>
	I0729 10:39:49.544924   22547 main.go:141] libmachine: (ha-763049-m02)       <target type='serial' port='0'/>
	I0729 10:39:49.544971   22547 main.go:141] libmachine: (ha-763049-m02)     </console>
	I0729 10:39:49.544984   22547 main.go:141] libmachine: (ha-763049-m02)     <rng model='virtio'>
	I0729 10:39:49.544993   22547 main.go:141] libmachine: (ha-763049-m02)       <backend model='random'>/dev/random</backend>
	I0729 10:39:49.544998   22547 main.go:141] libmachine: (ha-763049-m02)     </rng>
	I0729 10:39:49.545004   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.545009   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.545016   22547 main.go:141] libmachine: (ha-763049-m02)   </devices>
	I0729 10:39:49.545022   22547 main.go:141] libmachine: (ha-763049-m02) </domain>
	I0729 10:39:49.545030   22547 main.go:141] libmachine: (ha-763049-m02) 
	I0729 10:39:49.552646   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:b4:66:7d in network default
	I0729 10:39:49.553493   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring networks are active...
	I0729 10:39:49.553515   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:49.554254   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring network default is active
	I0729 10:39:49.554624   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring network mk-ha-763049 is active
	I0729 10:39:49.555013   22547 main.go:141] libmachine: (ha-763049-m02) Getting domain xml...
	I0729 10:39:49.555702   22547 main.go:141] libmachine: (ha-763049-m02) Creating domain...
	I0729 10:39:50.763183   22547 main.go:141] libmachine: (ha-763049-m02) Waiting to get IP...
	I0729 10:39:50.763960   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:50.764372   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:50.764399   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:50.764337   22945 retry.go:31] will retry after 256.083153ms: waiting for machine to come up
	I0729 10:39:51.021770   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.022192   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.022268   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.022169   22945 retry.go:31] will retry after 250.837815ms: waiting for machine to come up
	I0729 10:39:51.274592   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.275098   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.275128   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.275041   22945 retry.go:31] will retry after 336.627351ms: waiting for machine to come up
	I0729 10:39:51.613501   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.613936   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.613964   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.613892   22945 retry.go:31] will retry after 440.270957ms: waiting for machine to come up
	I0729 10:39:52.055499   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:52.055935   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:52.055970   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:52.055882   22945 retry.go:31] will retry after 625.822615ms: waiting for machine to come up
	I0729 10:39:52.683824   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:52.684295   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:52.684321   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:52.684252   22945 retry.go:31] will retry after 681.635336ms: waiting for machine to come up
	I0729 10:39:53.367191   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:53.367665   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:53.367715   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:53.367639   22945 retry.go:31] will retry after 904.805807ms: waiting for machine to come up
	I0729 10:39:54.274089   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:54.274530   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:54.274560   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:54.274470   22945 retry.go:31] will retry after 1.013356281s: waiting for machine to come up
	I0729 10:39:55.289617   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:55.290021   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:55.290041   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:55.289967   22945 retry.go:31] will retry after 1.217157419s: waiting for machine to come up
	I0729 10:39:56.508416   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:56.508746   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:56.508766   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:56.508703   22945 retry.go:31] will retry after 2.283747131s: waiting for machine to come up
	I0729 10:39:58.795274   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:58.795793   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:58.795820   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:58.795749   22945 retry.go:31] will retry after 2.363192954s: waiting for machine to come up
	I0729 10:40:01.160070   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:01.160516   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:01.160544   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:01.160462   22945 retry.go:31] will retry after 3.128051052s: waiting for machine to come up
	I0729 10:40:04.290282   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:04.290804   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:04.290826   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:04.290742   22945 retry.go:31] will retry after 3.748020631s: waiting for machine to come up
	I0729 10:40:08.041140   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:08.041486   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:08.041511   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:08.041446   22945 retry.go:31] will retry after 5.530915798s: waiting for machine to come up
	I0729 10:40:13.577470   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.577984   22547 main.go:141] libmachine: (ha-763049-m02) Found IP for machine: 192.168.39.39
	I0729 10:40:13.578013   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has current primary IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.578023   22547 main.go:141] libmachine: (ha-763049-m02) Reserving static IP address...
	I0729 10:40:13.578410   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find host DHCP lease matching {name: "ha-763049-m02", mac: "52:54:00:d3:91:e5", ip: "192.168.39.39"} in network mk-ha-763049
	I0729 10:40:13.652350   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Getting to WaitForSSH function...
	I0729 10:40:13.652378   22547 main.go:141] libmachine: (ha-763049-m02) Reserved static IP address: 192.168.39.39
	I0729 10:40:13.652416   22547 main.go:141] libmachine: (ha-763049-m02) Waiting for SSH to be available...
	I0729 10:40:13.655188   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.655588   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.655617   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.655808   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using SSH client type: external
	I0729 10:40:13.655842   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa (-rw-------)
	I0729 10:40:13.655885   22547 main.go:141] libmachine: (ha-763049-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:40:13.655897   22547 main.go:141] libmachine: (ha-763049-m02) DBG | About to run SSH command:
	I0729 10:40:13.655916   22547 main.go:141] libmachine: (ha-763049-m02) DBG | exit 0
	I0729 10:40:13.787211   22547 main.go:141] libmachine: (ha-763049-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 10:40:13.787483   22547 main.go:141] libmachine: (ha-763049-m02) KVM machine creation complete!
	I0729 10:40:13.787810   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:40:13.788477   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:13.788687   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:13.788890   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:40:13.788907   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:40:13.790123   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:40:13.790139   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:40:13.790146   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:40:13.790154   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:13.792784   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.793200   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.793225   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.793465   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:13.793643   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.793830   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.794001   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:13.794172   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:13.794371   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:13.794382   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:40:13.906181   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:40:13.906221   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:40:13.906231   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:13.908982   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.909389   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.909418   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.909554   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:13.909756   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.909905   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.910039   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:13.910178   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:13.910336   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:13.910346   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:40:14.023378   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:40:14.023457   22547 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:40:14.023473   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:40:14.023483   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.023705   22547 buildroot.go:166] provisioning hostname "ha-763049-m02"
	I0729 10:40:14.023726   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.023937   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.026733   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.027077   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.027101   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.027235   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.027426   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.027593   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.027720   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.027896   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.028107   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.028124   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049-m02 && echo "ha-763049-m02" | sudo tee /etc/hostname
	I0729 10:40:14.153617   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049-m02
	
	I0729 10:40:14.153650   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.156622   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.157064   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.157099   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.157259   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.157458   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.157623   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.157924   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.158092   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.158302   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.158321   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:40:14.280573   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:40:14.280605   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:40:14.280658   22547 buildroot.go:174] setting up certificates
	I0729 10:40:14.280683   22547 provision.go:84] configureAuth start
	I0729 10:40:14.280701   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.280979   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:14.283489   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.283890   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.283917   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.284111   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.286590   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.286944   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.286987   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.287142   22547 provision.go:143] copyHostCerts
	I0729 10:40:14.287183   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:40:14.287223   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:40:14.287235   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:40:14.287307   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:40:14.287410   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:40:14.287434   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:40:14.287442   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:40:14.287484   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:40:14.287559   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:40:14.287581   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:40:14.287588   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:40:14.287625   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:40:14.287709   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049-m02 san=[127.0.0.1 192.168.39.39 ha-763049-m02 localhost minikube]
	I0729 10:40:14.362963   22547 provision.go:177] copyRemoteCerts
	I0729 10:40:14.363020   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:40:14.363045   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.365626   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.365963   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.365991   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.366181   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.366373   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.366533   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.366659   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:14.453521   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:40:14.453593   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:40:14.479396   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:40:14.479470   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:40:14.505028   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:40:14.505093   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:40:14.528997   22547 provision.go:87] duration metric: took 248.298993ms to configureAuth
	I0729 10:40:14.529026   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:40:14.529204   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:40:14.529286   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.531949   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.532260   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.532286   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.532426   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.532591   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.532748   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.532895   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.533051   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.533237   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.533255   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:40:14.801521   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:40:14.801548   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:40:14.801556   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetURL
	I0729 10:40:14.802764   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using libvirt version 6000000
	I0729 10:40:14.805815   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.806245   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.806273   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.806461   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:40:14.806479   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:40:14.806485   22547 client.go:171] duration metric: took 25.730528228s to LocalClient.Create
	I0729 10:40:14.806507   22547 start.go:167] duration metric: took 25.730587462s to libmachine.API.Create "ha-763049"
	I0729 10:40:14.806516   22547 start.go:293] postStartSetup for "ha-763049-m02" (driver="kvm2")
	I0729 10:40:14.806526   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:40:14.806546   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:14.806794   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:40:14.806821   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.809076   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.809441   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.809468   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.809581   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.809717   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.809839   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.810057   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:14.898192   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:40:14.902565   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:40:14.902588   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:40:14.902662   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:40:14.902769   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:40:14.902781   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:40:14.902862   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:40:14.913944   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:40:14.940699   22547 start.go:296] duration metric: took 134.171196ms for postStartSetup
	I0729 10:40:14.940755   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:40:14.941327   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:14.943504   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.943820   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.943852   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.944057   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:40:14.944253   22547 start.go:128] duration metric: took 25.88602743s to createHost
	I0729 10:40:14.944279   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.946518   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.946819   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.946880   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.946983   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.947128   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.947281   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.947409   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.947555   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.947712   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.947723   22547 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:40:15.059704   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249615.037551185
	
	I0729 10:40:15.059735   22547 fix.go:216] guest clock: 1722249615.037551185
	I0729 10:40:15.059747   22547 fix.go:229] Guest: 2024-07-29 10:40:15.037551185 +0000 UTC Remote: 2024-07-29 10:40:14.944265521 +0000 UTC m=+82.900271025 (delta=93.285664ms)
	I0729 10:40:15.059771   22547 fix.go:200] guest clock delta is within tolerance: 93.285664ms
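(The delta reported above is simply the remote "date +%s.%N" reading compared against the local clock at the moment the SSH command returned. A minimal, self-contained Go sketch of that comparison follows; the 2s tolerance is an assumed value for illustration only, the real check lives in fix.go.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1722249615.037551185")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nanosecond precision.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722249615.037551185") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // assumption for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; guest time would be adjusted\n", delta)
	}
}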
	I0729 10:40:15.059782   22547 start.go:83] releasing machines lock for "ha-763049-m02", held for 26.001645056s
	I0729 10:40:15.059809   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.060129   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:15.062589   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.062932   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.062964   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.065431   22547 out.go:177] * Found network options:
	I0729 10:40:15.066951   22547 out.go:177]   - NO_PROXY=192.168.39.68
	W0729 10:40:15.068109   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:40:15.068144   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.068738   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.068946   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.069009   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:40:15.069049   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	W0729 10:40:15.069146   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:40:15.069224   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:40:15.069244   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:15.071950   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072030   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072308   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.072349   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072376   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.072398   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072497   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:15.072591   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:15.072675   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:15.072733   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:15.072792   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:15.072840   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:15.072996   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:15.072996   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:15.311326   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:40:15.317166   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:40:15.317235   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:40:15.333420   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:40:15.333442   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:40:15.333499   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:40:15.349212   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:40:15.363556   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:40:15.363621   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:40:15.377859   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:40:15.392260   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:40:15.508310   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:40:15.671262   22547 docker.go:233] disabling docker service ...
	I0729 10:40:15.671341   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:40:15.686239   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:40:15.699671   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:40:15.817382   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:40:15.944364   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:40:15.959074   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:40:15.979419   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:40:15.979485   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:15.990671   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:40:15.990761   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.001785   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.012564   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.022917   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:40:16.033413   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.043862   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.061875   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.073112   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:40:16.083230   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:40:16.083301   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:40:16.096536   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:40:16.107231   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:40:16.231158   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:40:16.371507   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:40:16.371590   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:40:16.377139   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:40:16.377189   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:40:16.381032   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:40:16.422442   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:40:16.422516   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:40:16.454744   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:40:16.484710   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:40:16.486332   22547 out.go:177]   - env NO_PROXY=192.168.39.68
	I0729 10:40:16.487547   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:16.490155   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:16.490479   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:16.490515   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:16.490693   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:40:16.494942   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
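(The bash one-liner above updates /etc/hosts idempotently: it keeps every line that does not already map the hostname, then appends the fresh "ip<TAB>hostname" entry. A small in-memory Go sketch of the same rewrite; the real command runs remotely under sudo via a temp file.)

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<hostname>" and
// appends the new mapping, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.39.250\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(sample, "192.168.39.1", "host.minikube.internal"))
}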
	I0729 10:40:16.508235   22547 mustload.go:65] Loading cluster: ha-763049
	I0729 10:40:16.508453   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:40:16.508709   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:16.508735   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:16.523202   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0729 10:40:16.523600   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:16.524011   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:16.524045   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:16.524344   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:16.524515   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:40:16.525982   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:40:16.526340   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:16.526367   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:16.540874   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0729 10:40:16.541337   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:16.541781   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:16.541803   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:16.542152   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:16.542331   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:40:16.542538   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.39
	I0729 10:40:16.542548   22547 certs.go:194] generating shared ca certs ...
	I0729 10:40:16.542560   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.542741   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:40:16.542794   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:40:16.542806   22547 certs.go:256] generating profile certs ...
	I0729 10:40:16.542920   22547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:40:16.542947   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb
	I0729 10:40:16.542965   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.254]
	I0729 10:40:16.776120   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb ...
	I0729 10:40:16.776148   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb: {Name:mk76f5031f273c03270902394a7378060388e576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.776337   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb ...
	I0729 10:40:16.776353   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb: {Name:mk219b6f38ef315c3e77e8846f51b55e50556b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.776445   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:40:16.776602   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:40:16.776772   22547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:40:16.776792   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:40:16.776811   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:40:16.776830   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:40:16.776853   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:40:16.776869   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:40:16.776880   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:40:16.776897   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:40:16.776915   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:40:16.776977   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:40:16.777022   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:40:16.777035   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:40:16.777072   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:40:16.777102   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:40:16.777129   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:40:16.777183   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:40:16.777227   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:40:16.777247   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:16.777264   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:40:16.777301   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:40:16.780314   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:16.780682   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:40:16.780702   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:16.780891   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:40:16.781095   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:40:16.781235   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:40:16.781477   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:40:16.855155   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 10:40:16.860509   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 10:40:16.872756   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 10:40:16.877009   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 10:40:16.888497   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 10:40:16.893180   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 10:40:16.904543   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 10:40:16.909594   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 10:40:16.921641   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 10:40:16.926380   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 10:40:16.939815   22547 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 10:40:16.944530   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 10:40:16.957272   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:40:16.983699   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:40:17.015306   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:40:17.039317   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:40:17.064417   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 10:40:17.089285   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:40:17.113583   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:40:17.138338   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:40:17.162527   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:40:17.186613   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:40:17.210724   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:40:17.235005   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 10:40:17.251740   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 10:40:17.269133   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 10:40:17.285894   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 10:40:17.302454   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 10:40:17.320009   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 10:40:17.336669   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 10:40:17.354521   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:40:17.360601   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:40:17.372110   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.376682   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.376740   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.382734   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:40:17.394077   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:40:17.405356   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.409859   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.409922   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.415664   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:40:17.426940   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:40:17.438352   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.442890   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.442953   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.448872   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:40:17.460242   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:40:17.464460   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:40:17.464510   22547 kubeadm.go:934] updating node {m02 192.168.39.39 8443 v1.30.3 crio true true} ...
	I0729 10:40:17.464598   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:40:17.464628   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:40:17.464679   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:40:17.483432   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:40:17.483502   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
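(kube-vip.go renders the static-pod manifest printed above from the cluster's VIP, interface and API server port. A rough sketch of that kind of templating; the struct fields and the heavily trimmed template here are illustrative, not minikube's actual ones.)

package main

import (
	"os"
	"text/template"
)

// kubeVipParams holds the values that vary per cluster in the manifest above.
type kubeVipParams struct {
	VIP       string
	Interface string
	Port      string
}

// Trimmed template; the real manifest also sets leader-election and
// load-balancing environment variables, capabilities and the kubeconfig mount.
const kubeVipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipTemplate))
	// Values taken from the log above: VIP 192.168.39.254 announced on eth0, port 8443.
	params := kubeVipParams{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}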
	I0729 10:40:17.483572   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:40:17.494014   22547 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 10:40:17.494085   22547 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 10:40:17.504126   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 10:40:17.504154   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:40:17.504157   22547 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 10:40:17.504226   22547 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:40:17.504164   22547 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 10:40:17.508598   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 10:40:17.508624   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 10:40:26.588374   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:40:26.605887   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:40:26.605979   22547 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:40:26.610585   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 10:40:26.610626   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 10:40:49.462433   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:40:49.462508   22547 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:40:49.469293   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 10:40:49.469326   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
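(Each download.go:107 line above requests a binary with a "checksum=file:<url>.sha256" query, i.e. the published SHA-256 is fetched alongside the binary and the payload is verified before it is cached and copied to the node. A self-contained sketch of that verification using plain net/http and crypto/sha256; minikube's actual downloader streams to disk and differs in detail.)

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory; fine for a checksum file, and kept
// deliberately simple for the binary itself (real code streams to disk).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"

	payload, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sumFile))[0] // first token is the hex digest
	got := sha256.Sum256(payload)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch: refusing to install kubectl")
	}
	fmt.Println("kubectl checksum verified")
}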
	I0729 10:40:49.699988   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 10:40:49.709764   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 10:40:49.727350   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:40:49.746778   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:40:49.765868   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:40:49.770160   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:40:49.783283   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:40:49.897442   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:40:49.913547   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:40:49.913865   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:49.913898   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:49.929451   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0729 10:40:49.929930   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:49.930380   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:49.930401   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:49.930634   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:49.930830   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:40:49.931071   22547 start.go:317] joinCluster: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:40:49.931184   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 10:40:49.931199   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:40:49.934458   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:49.934946   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:40:49.934972   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:49.935196   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:40:49.935349   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:40:49.935516   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:40:49.935649   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:40:50.096508   22547 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:40:50.096559   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xrik40.tjmw5hvghjzuo9u5 --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0729 10:41:13.101397   22547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xrik40.tjmw5hvghjzuo9u5 --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (23.004815839s)
	I0729 10:41:13.101437   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 10:41:13.655844   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049-m02 minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=false
	I0729 10:41:13.789098   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-763049-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 10:41:13.916429   22547 start.go:319] duration metric: took 23.985355289s to joinCluster
	I0729 10:41:13.916505   22547 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:41:13.916779   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:41:13.918192   22547 out.go:177] * Verifying Kubernetes components...
	I0729 10:41:13.919632   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:41:14.203636   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:41:14.278045   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:41:14.278505   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 10:41:14.278584   22547 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0729 10:41:14.278875   22547 node_ready.go:35] waiting up to 6m0s for node "ha-763049-m02" to be "Ready" ...
	I0729 10:41:14.279004   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:14.279016   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:14.279028   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:14.279050   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:14.292508   22547 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 10:41:14.779193   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:14.779247   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:14.779259   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:14.779266   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:14.785629   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:15.279581   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:15.279612   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:15.279622   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:15.279627   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:15.286025   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:15.780020   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:15.780048   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:15.780059   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:15.780064   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:15.787539   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:41:16.279395   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:16.279417   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:16.279425   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:16.279430   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:16.309331   22547 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0729 10:41:16.309871   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
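(node_ready.go above polls GET /api/v1/nodes/ha-763049-m02 roughly every 500ms until the node reports a Ready condition of True. A compact equivalent using client-go; illustrative only, since minikube issues these requests through its own round-tripper wrapper rather than a clientset.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the loader.go line above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19337-3845/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m budget as "waiting up to 6m0s for node ... to be Ready" above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-763049-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node ha-763049-m02 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
		}
	}
}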
	I0729 10:41:16.779417   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:16.779440   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:16.779447   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:16.779451   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:16.783431   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:17.279414   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:17.279452   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:17.279463   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:17.279469   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:17.282994   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:17.780018   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:17.780044   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:17.780055   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:17.780061   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:17.783842   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.279519   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:18.279540   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:18.279548   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:18.279553   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:18.282922   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.779287   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:18.779307   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:18.779315   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:18.779319   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:18.782547   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.783444   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:19.279506   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:19.279536   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:19.279547   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:19.279551   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:19.283370   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:19.779049   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:19.779069   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:19.779083   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:19.779088   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:19.782314   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.279894   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:20.279917   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:20.279926   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:20.279930   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:20.283851   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.779982   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:20.780006   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:20.780018   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:20.780023   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:20.783876   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.784771   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:21.279958   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:21.279978   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:21.279987   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:21.279991   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:21.283232   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:21.779188   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:21.779209   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:21.779218   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:21.779221   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:21.782839   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:22.280052   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:22.280073   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:22.280080   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:22.280083   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:22.288062   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:41:22.779995   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:22.780022   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:22.780034   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:22.780041   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:22.784016   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:22.784887   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:23.279265   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:23.279286   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:23.279294   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:23.279299   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:23.282941   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:23.779940   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:23.779962   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:23.779973   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:23.779978   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:23.783848   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.279543   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:24.279565   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:24.279573   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:24.279579   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:24.283010   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.780070   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:24.780092   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:24.780102   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:24.780107   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:24.783959   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.785130   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:25.279937   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:25.279959   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:25.279966   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:25.279971   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:25.283205   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:25.779232   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:25.779254   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:25.779262   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:25.779265   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:25.783726   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:26.279902   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:26.279923   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:26.279930   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:26.279934   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:26.284077   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:26.779923   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:26.779944   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:26.779952   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:26.779956   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:26.783347   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:27.279113   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:27.279135   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:27.279142   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:27.279148   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:27.282895   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:27.283912   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:27.779949   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:27.779971   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:27.779979   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:27.779984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:27.783783   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:28.279736   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:28.279762   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:28.279772   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:28.279777   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:28.283208   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:28.779398   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:28.779426   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:28.779437   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:28.779443   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:28.782615   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:29.279939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:29.279967   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:29.279977   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:29.279984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:29.285806   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:41:29.286535   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:29.779871   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:29.779893   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:29.779902   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:29.779906   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:29.784006   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:30.279363   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:30.279386   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:30.279395   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:30.279400   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:30.283478   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:30.779957   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:30.779985   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:30.779995   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:30.780001   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:30.783212   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:31.279254   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:31.279276   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:31.279283   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:31.279289   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:31.283909   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:31.780011   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:31.780037   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:31.780048   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:31.780055   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:31.783632   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:31.784159   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:32.279389   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.279409   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.279416   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.279422   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.283068   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.779139   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.779160   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.779168   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.779173   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.782381   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.783081   22547 node_ready.go:49] node "ha-763049-m02" has status "Ready":"True"
	I0729 10:41:32.783106   22547 node_ready.go:38] duration metric: took 18.5041845s for node "ha-763049-m02" to be "Ready" ...
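The repeated GETs on /api/v1/nodes/ha-763049-m02 above are the node_ready poll: the Node object is re-read roughly every 500ms until its Ready condition reports True (about 18.5s here). A rough standalone equivalent of that wait, as a sketch only and assuming the kubeconfig context carries the profile name ha-763049, would be:

    kubectl --context ha-763049 wait --for=condition=Ready node/ha-763049-m02 --timeout=6m0s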
	I0729 10:41:32.783115   22547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:41:32.783183   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:32.783193   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.783200   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.783203   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.787652   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:32.793437   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.793505   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-l4n5p
	I0729 10:41:32.793510   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.793517   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.793522   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.796630   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.797283   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.797297   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.797303   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.797307   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.801151   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.801632   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.801649   22547 pod_ready.go:81] duration metric: took 8.190342ms for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.801657   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.801706   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xxwnd
	I0729 10:41:32.801713   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.801720   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.801723   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.806250   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:32.806896   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.806909   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.806914   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.806920   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.810624   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.811138   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.811154   22547 pod_ready.go:81] duration metric: took 9.491176ms for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.811162   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.811205   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049
	I0729 10:41:32.811212   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.811218   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.811222   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.813570   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.814298   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.814312   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.814319   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.814324   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.816372   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.816861   22547 pod_ready.go:92] pod "etcd-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.816879   22547 pod_ready.go:81] duration metric: took 5.711324ms for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.816887   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.816932   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m02
	I0729 10:41:32.816939   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.816951   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.816958   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.819067   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.819638   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.819653   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.819659   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.819663   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.821529   22547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 10:41:32.822180   22547 pod_ready.go:92] pod "etcd-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.822199   22547 pod_ready.go:81] duration metric: took 5.30456ms for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.822217   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.979579   22547 request.go:629] Waited for 157.311219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:41:32.979644   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:41:32.979651   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.979661   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.979669   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.983237   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.180170   22547 request.go:629] Waited for 196.360554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.180246   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.180254   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.180262   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.180270   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.183720   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.184318   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.184336   22547 pod_ready.go:81] duration metric: took 362.111868ms for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.184344   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.379539   22547 request.go:629] Waited for 195.133783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:41:33.379612   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:41:33.379618   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.379629   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.379636   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.382783   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.579862   22547 request.go:629] Waited for 196.249038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:33.579920   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:33.579925   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.579932   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.579935   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.583313   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.583929   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.583949   22547 pod_ready.go:81] duration metric: took 399.596683ms for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.583962   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.780112   22547 request.go:629] Waited for 196.083438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:41:33.780175   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:41:33.780180   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.780190   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.780195   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.784281   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:33.979323   22547 request.go:629] Waited for 194.303521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.979387   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.979394   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.979405   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.979413   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.982854   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.983410   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.983427   22547 pod_ready.go:81] duration metric: took 399.458344ms for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.983436   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.180000   22547 request.go:629] Waited for 196.505232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:41:34.180055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:41:34.180060   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.180068   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.180072   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.183283   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.379207   22547 request.go:629] Waited for 195.29513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:34.379270   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:34.379275   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.379283   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.379286   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.382256   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:34.382826   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:34.382850   22547 pod_ready.go:81] duration metric: took 399.403885ms for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.382862   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.579871   22547 request.go:629] Waited for 196.931891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:41:34.579939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:41:34.579946   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.579957   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.579969   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.583394   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.779620   22547 request.go:629] Waited for 195.368999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:34.779699   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:34.779710   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.779720   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.779726   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.782917   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.783536   22547 pod_ready.go:92] pod "kube-proxy-mhbk7" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:34.783553   22547 pod_ready.go:81] duration metric: took 400.684572ms for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.783562   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.980174   22547 request.go:629] Waited for 196.526855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:41:34.980233   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:41:34.980239   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.980246   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.980251   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.983694   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.179724   22547 request.go:629] Waited for 195.358019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.179793   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.179798   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.179805   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.179809   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.182952   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.183581   22547 pod_ready.go:92] pod "kube-proxy-tf7wt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.183598   22547 pod_ready.go:81] duration metric: took 400.030612ms for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.183607   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.379805   22547 request.go:629] Waited for 196.143402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:41:35.379888   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:41:35.379898   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.379911   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.379935   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.383257   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.579240   22547 request.go:629] Waited for 195.285053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:35.579312   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:35.579318   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.579329   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.579337   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.582755   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.583460   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.583484   22547 pod_ready.go:81] duration metric: took 399.871989ms for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.583493   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.779637   22547 request.go:629] Waited for 196.083393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:41:35.779725   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:41:35.779733   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.779745   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.779758   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.782813   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.979984   22547 request.go:629] Waited for 196.384051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.980055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.980063   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.980073   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.980081   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.983518   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.983938   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.983954   22547 pod_ready.go:81] duration metric: took 400.455357ms for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.983965   22547 pod_ready.go:38] duration metric: took 3.200839818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
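The pod_ready phase above walks each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), confirms its Ready condition, and fetches the node it runs on as a cross-check. A comparable manual check, sketched with the same label selectors the log lists and an assumed ha-763049 context, would be:

    kubectl --context ha-763049 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
    kubectl --context ha-763049 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m0s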
	I0729 10:41:35.983985   22547 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:41:35.984030   22547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:41:36.000253   22547 api_server.go:72] duration metric: took 22.083700677s to wait for apiserver process to appear ...
	I0729 10:41:36.000278   22547 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:41:36.000301   22547 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0729 10:41:36.006393   22547 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0729 10:41:36.006457   22547 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0729 10:41:36.006464   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.006472   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.006477   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.007373   22547 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 10:41:36.007472   22547 api_server.go:141] control plane version: v1.30.3
	I0729 10:41:36.007486   22547 api_server.go:131] duration metric: took 7.203302ms to wait for apiserver health ...
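The health check above first probes the API server's /healthz endpoint and then reads /version to record the control-plane version (v1.30.3). With the same kubeconfig, both raw endpoints can be queried directly; sketch only, context name assumed:

    kubectl --context ha-763049 get --raw /healthz
    kubectl --context ha-763049 get --raw /version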
	I0729 10:41:36.007493   22547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:41:36.179890   22547 request.go:629] Waited for 172.32872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.179939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.179945   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.179954   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.179961   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.185713   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:41:36.189832   22547 system_pods.go:59] 17 kube-system pods found
	I0729 10:41:36.189861   22547 system_pods.go:61] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:41:36.189866   22547 system_pods.go:61] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:41:36.189874   22547 system_pods.go:61] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:41:36.189877   22547 system_pods.go:61] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:41:36.189881   22547 system_pods.go:61] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:41:36.189885   22547 system_pods.go:61] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:41:36.189890   22547 system_pods.go:61] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:41:36.189893   22547 system_pods.go:61] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:41:36.189897   22547 system_pods.go:61] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:41:36.189902   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:41:36.189905   22547 system_pods.go:61] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:41:36.189908   22547 system_pods.go:61] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:41:36.189911   22547 system_pods.go:61] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:41:36.189914   22547 system_pods.go:61] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:41:36.189917   22547 system_pods.go:61] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:41:36.189920   22547 system_pods.go:61] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:41:36.189925   22547 system_pods.go:61] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:41:36.189931   22547 system_pods.go:74] duration metric: took 182.432433ms to wait for pod list to return data ...
	I0729 10:41:36.189941   22547 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:41:36.379320   22547 request.go:629] Waited for 189.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:41:36.379382   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:41:36.379387   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.379394   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.379397   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.382955   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:36.383287   22547 default_sa.go:45] found service account: "default"
	I0729 10:41:36.383305   22547 default_sa.go:55] duration metric: took 193.358744ms for default service account to be created ...
	I0729 10:41:36.383314   22547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:41:36.580150   22547 request.go:629] Waited for 196.780261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.580216   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.580222   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.580229   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.580241   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.586303   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:36.592134   22547 system_pods.go:86] 17 kube-system pods found
	I0729 10:41:36.592164   22547 system_pods.go:89] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:41:36.592172   22547 system_pods.go:89] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:41:36.592179   22547 system_pods.go:89] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:41:36.592185   22547 system_pods.go:89] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:41:36.592190   22547 system_pods.go:89] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:41:36.592196   22547 system_pods.go:89] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:41:36.592201   22547 system_pods.go:89] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:41:36.592207   22547 system_pods.go:89] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:41:36.592213   22547 system_pods.go:89] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:41:36.592219   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:41:36.592225   22547 system_pods.go:89] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:41:36.592230   22547 system_pods.go:89] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:41:36.592236   22547 system_pods.go:89] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:41:36.592245   22547 system_pods.go:89] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:41:36.592252   22547 system_pods.go:89] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:41:36.592259   22547 system_pods.go:89] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:41:36.592264   22547 system_pods.go:89] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:41:36.592273   22547 system_pods.go:126] duration metric: took 208.951852ms to wait for k8s-apps to be running ...
	I0729 10:41:36.592285   22547 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:41:36.592333   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:41:36.609939   22547 system_svc.go:56] duration metric: took 17.644955ms WaitForService to wait for kubelet
	I0729 10:41:36.609971   22547 kubeadm.go:582] duration metric: took 22.693430585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:41:36.610000   22547 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:41:36.779348   22547 request.go:629] Waited for 169.275297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0729 10:41:36.779426   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0729 10:41:36.779436   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.779445   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.779452   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.782874   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:36.783823   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:41:36.783851   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:41:36.783864   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:41:36.783877   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:41:36.783883   22547 node_conditions.go:105] duration metric: took 173.878271ms to run NodePressure ...
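The NodePressure step reads each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here) while verifying that no pressure conditions are set. The same figures can be pulled with a jsonpath query; sketch only, context name assumed, and the ephemeral-storage value sits in the same .status.capacity map:

    kubectl --context ha-763049 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'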
	I0729 10:41:36.783897   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:41:36.783930   22547 start.go:255] writing updated cluster config ...
	I0729 10:41:36.786047   22547 out.go:177] 
	I0729 10:41:36.787598   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:41:36.787683   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:41:36.790645   22547 out.go:177] * Starting "ha-763049-m03" control-plane node in "ha-763049" cluster
	I0729 10:41:36.791960   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:41:36.791995   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:41:36.792114   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:41:36.792128   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:41:36.792257   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:41:36.792456   22547 start.go:360] acquireMachinesLock for ha-763049-m03: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:41:36.792523   22547 start.go:364] duration metric: took 42.732µs to acquireMachinesLock for "ha-763049-m03"
	I0729 10:41:36.792551   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:41:36.792669   22547 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 10:41:36.795151   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:41:36.795244   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:41:36.795279   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:41:36.810095   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0729 10:41:36.810570   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:41:36.811038   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:41:36.811058   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:41:36.811432   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:41:36.811594   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:41:36.811756   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:41:36.811971   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:41:36.812002   22547 client.go:168] LocalClient.Create starting
	I0729 10:41:36.812037   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:41:36.812085   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:41:36.812099   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:41:36.812161   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:41:36.812187   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:41:36.812202   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:41:36.812227   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:41:36.812238   22547 main.go:141] libmachine: (ha-763049-m03) Calling .PreCreateCheck
	I0729 10:41:36.812408   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:41:36.812783   22547 main.go:141] libmachine: Creating machine...
	I0729 10:41:36.812797   22547 main.go:141] libmachine: (ha-763049-m03) Calling .Create
	I0729 10:41:36.812916   22547 main.go:141] libmachine: (ha-763049-m03) Creating KVM machine...
	I0729 10:41:36.814361   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found existing default KVM network
	I0729 10:41:36.814518   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found existing private KVM network mk-ha-763049
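The kvm2 driver reuses the libvirt networks it found above (the default network plus the cluster's private mk-ha-763049 network) when it defines the new m03 domain. Those networks can be inspected on the host with virsh; sketch only, using the qemu:///system URI from the log:

    virsh -c qemu:///system net-list --all
    virsh -c qemu:///system net-info mk-ha-763049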
	I0729 10:41:36.814672   22547 main.go:141] libmachine: (ha-763049-m03) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 ...
	I0729 10:41:36.814715   22547 main.go:141] libmachine: (ha-763049-m03) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:41:36.814792   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:36.814657   23477 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:41:36.814878   22547 main.go:141] libmachine: (ha-763049-m03) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:41:37.038880   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.038752   23477 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa...
	I0729 10:41:37.320257   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.320103   23477 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/ha-763049-m03.rawdisk...
	I0729 10:41:37.320296   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Writing magic tar header
	I0729 10:41:37.320311   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Writing SSH key tar header
	I0729 10:41:37.320324   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.320245   23477 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 ...
	I0729 10:41:37.320397   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03
	I0729 10:41:37.320428   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:41:37.320442   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 (perms=drwx------)
	I0729 10:41:37.320456   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:41:37.320467   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:41:37.320476   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:41:37.320489   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:41:37.320507   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:41:37.320518   22547 main.go:141] libmachine: (ha-763049-m03) Creating domain...
	I0729 10:41:37.320528   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:41:37.320540   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:41:37.320547   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:41:37.320555   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:41:37.320567   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home
	I0729 10:41:37.320579   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Skipping /home - not owner
	I0729 10:41:37.321448   22547 main.go:141] libmachine: (ha-763049-m03) define libvirt domain using xml: 
	I0729 10:41:37.321481   22547 main.go:141] libmachine: (ha-763049-m03) <domain type='kvm'>
	I0729 10:41:37.321517   22547 main.go:141] libmachine: (ha-763049-m03)   <name>ha-763049-m03</name>
	I0729 10:41:37.321539   22547 main.go:141] libmachine: (ha-763049-m03)   <memory unit='MiB'>2200</memory>
	I0729 10:41:37.321549   22547 main.go:141] libmachine: (ha-763049-m03)   <vcpu>2</vcpu>
	I0729 10:41:37.321560   22547 main.go:141] libmachine: (ha-763049-m03)   <features>
	I0729 10:41:37.321571   22547 main.go:141] libmachine: (ha-763049-m03)     <acpi/>
	I0729 10:41:37.321580   22547 main.go:141] libmachine: (ha-763049-m03)     <apic/>
	I0729 10:41:37.321588   22547 main.go:141] libmachine: (ha-763049-m03)     <pae/>
	I0729 10:41:37.321597   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.321606   22547 main.go:141] libmachine: (ha-763049-m03)   </features>
	I0729 10:41:37.321621   22547 main.go:141] libmachine: (ha-763049-m03)   <cpu mode='host-passthrough'>
	I0729 10:41:37.321632   22547 main.go:141] libmachine: (ha-763049-m03)   
	I0729 10:41:37.321642   22547 main.go:141] libmachine: (ha-763049-m03)   </cpu>
	I0729 10:41:37.321650   22547 main.go:141] libmachine: (ha-763049-m03)   <os>
	I0729 10:41:37.321660   22547 main.go:141] libmachine: (ha-763049-m03)     <type>hvm</type>
	I0729 10:41:37.321669   22547 main.go:141] libmachine: (ha-763049-m03)     <boot dev='cdrom'/>
	I0729 10:41:37.321678   22547 main.go:141] libmachine: (ha-763049-m03)     <boot dev='hd'/>
	I0729 10:41:37.321701   22547 main.go:141] libmachine: (ha-763049-m03)     <bootmenu enable='no'/>
	I0729 10:41:37.321717   22547 main.go:141] libmachine: (ha-763049-m03)   </os>
	I0729 10:41:37.321729   22547 main.go:141] libmachine: (ha-763049-m03)   <devices>
	I0729 10:41:37.321741   22547 main.go:141] libmachine: (ha-763049-m03)     <disk type='file' device='cdrom'>
	I0729 10:41:37.321752   22547 main.go:141] libmachine: (ha-763049-m03)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/boot2docker.iso'/>
	I0729 10:41:37.321758   22547 main.go:141] libmachine: (ha-763049-m03)       <target dev='hdc' bus='scsi'/>
	I0729 10:41:37.321763   22547 main.go:141] libmachine: (ha-763049-m03)       <readonly/>
	I0729 10:41:37.321767   22547 main.go:141] libmachine: (ha-763049-m03)     </disk>
	I0729 10:41:37.321775   22547 main.go:141] libmachine: (ha-763049-m03)     <disk type='file' device='disk'>
	I0729 10:41:37.321781   22547 main.go:141] libmachine: (ha-763049-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:41:37.321801   22547 main.go:141] libmachine: (ha-763049-m03)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/ha-763049-m03.rawdisk'/>
	I0729 10:41:37.321821   22547 main.go:141] libmachine: (ha-763049-m03)       <target dev='hda' bus='virtio'/>
	I0729 10:41:37.321830   22547 main.go:141] libmachine: (ha-763049-m03)     </disk>
	I0729 10:41:37.321846   22547 main.go:141] libmachine: (ha-763049-m03)     <interface type='network'>
	I0729 10:41:37.321862   22547 main.go:141] libmachine: (ha-763049-m03)       <source network='mk-ha-763049'/>
	I0729 10:41:37.321873   22547 main.go:141] libmachine: (ha-763049-m03)       <model type='virtio'/>
	I0729 10:41:37.321885   22547 main.go:141] libmachine: (ha-763049-m03)     </interface>
	I0729 10:41:37.321893   22547 main.go:141] libmachine: (ha-763049-m03)     <interface type='network'>
	I0729 10:41:37.321908   22547 main.go:141] libmachine: (ha-763049-m03)       <source network='default'/>
	I0729 10:41:37.321917   22547 main.go:141] libmachine: (ha-763049-m03)       <model type='virtio'/>
	I0729 10:41:37.321925   22547 main.go:141] libmachine: (ha-763049-m03)     </interface>
	I0729 10:41:37.321932   22547 main.go:141] libmachine: (ha-763049-m03)     <serial type='pty'>
	I0729 10:41:37.321941   22547 main.go:141] libmachine: (ha-763049-m03)       <target port='0'/>
	I0729 10:41:37.321950   22547 main.go:141] libmachine: (ha-763049-m03)     </serial>
	I0729 10:41:37.321961   22547 main.go:141] libmachine: (ha-763049-m03)     <console type='pty'>
	I0729 10:41:37.321976   22547 main.go:141] libmachine: (ha-763049-m03)       <target type='serial' port='0'/>
	I0729 10:41:37.321987   22547 main.go:141] libmachine: (ha-763049-m03)     </console>
	I0729 10:41:37.321995   22547 main.go:141] libmachine: (ha-763049-m03)     <rng model='virtio'>
	I0729 10:41:37.322008   22547 main.go:141] libmachine: (ha-763049-m03)       <backend model='random'>/dev/random</backend>
	I0729 10:41:37.322016   22547 main.go:141] libmachine: (ha-763049-m03)     </rng>
	I0729 10:41:37.322023   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.322031   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.322044   22547 main.go:141] libmachine: (ha-763049-m03)   </devices>
	I0729 10:41:37.322058   22547 main.go:141] libmachine: (ha-763049-m03) </domain>
	I0729 10:41:37.322071   22547 main.go:141] libmachine: (ha-763049-m03) 
	I0729 10:41:37.328821   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:cc:8d:a0 in network default
	I0729 10:41:37.329372   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:37.329389   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring networks are active...
	I0729 10:41:37.330107   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring network default is active
	I0729 10:41:37.330478   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring network mk-ha-763049 is active
	I0729 10:41:37.330893   22547 main.go:141] libmachine: (ha-763049-m03) Getting domain xml...
	I0729 10:41:37.331525   22547 main.go:141] libmachine: (ha-763049-m03) Creating domain...
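For context, the define-then-start sequence logged above ("define libvirt domain using xml" followed by "Creating domain...") can be reproduced with the libvirt Go bindings. A minimal sketch, assuming the libvirt.org/go/libvirt package and an illustrative local XML file; this is not minikube's kvm2 driver code, only an approximation of the same API calls:

```go
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system libvirt daemon (the same URI class of
	// endpoint the kvm2 driver talks to).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Read a domain definition like the <domain type='kvm'> XML in the log.
	xml, err := os.ReadFile("ha-763049-m03.xml") // illustrative path
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	fmt.Println("domain defined and started")
}
```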
	I0729 10:41:38.572522   22547 main.go:141] libmachine: (ha-763049-m03) Waiting to get IP...
	I0729 10:41:38.573255   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:38.573642   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:38.573666   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:38.573630   23477 retry.go:31] will retry after 283.776015ms: waiting for machine to come up
	I0729 10:41:38.859117   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:38.859615   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:38.859656   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:38.859584   23477 retry.go:31] will retry after 276.316276ms: waiting for machine to come up
	I0729 10:41:39.137149   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.137618   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.137646   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.137560   23477 retry.go:31] will retry after 374.250186ms: waiting for machine to come up
	I0729 10:41:39.513141   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.513645   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.513672   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.513596   23477 retry.go:31] will retry after 383.719849ms: waiting for machine to come up
	I0729 10:41:39.899203   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.899607   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.899630   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.899561   23477 retry.go:31] will retry after 613.157454ms: waiting for machine to come up
	I0729 10:41:40.514395   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:40.514823   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:40.514850   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:40.514776   23477 retry.go:31] will retry after 607.711486ms: waiting for machine to come up
	I0729 10:41:41.124558   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:41.125036   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:41.125057   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:41.124988   23477 retry.go:31] will retry after 770.107414ms: waiting for machine to come up
	I0729 10:41:41.896172   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:41.896509   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:41.896529   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:41.896488   23477 retry.go:31] will retry after 1.112790457s: waiting for machine to come up
	I0729 10:41:43.010762   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:43.011203   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:43.011231   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:43.011142   23477 retry.go:31] will retry after 1.188759429s: waiting for machine to come up
	I0729 10:41:44.201555   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:44.202020   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:44.202045   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:44.201959   23477 retry.go:31] will retry after 2.128868743s: waiting for machine to come up
	I0729 10:41:46.332974   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:46.333469   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:46.333489   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:46.333424   23477 retry.go:31] will retry after 2.338540862s: waiting for machine to come up
	I0729 10:41:48.674543   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:48.675063   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:48.675092   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:48.674985   23477 retry.go:31] will retry after 2.825286266s: waiting for machine to come up
	I0729 10:41:51.503884   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:51.504275   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:51.504303   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:51.504226   23477 retry.go:31] will retry after 3.995808267s: waiting for machine to come up
	I0729 10:41:55.503905   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:55.504276   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:55.504303   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:55.504232   23477 retry.go:31] will retry after 5.274642694s: waiting for machine to come up
	I0729 10:42:00.783710   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.784124   22547 main.go:141] libmachine: (ha-763049-m03) Found IP for machine: 192.168.39.123
	I0729 10:42:00.784143   22547 main.go:141] libmachine: (ha-763049-m03) Reserving static IP address...
	I0729 10:42:00.784156   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has current primary IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.784616   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find host DHCP lease matching {name: "ha-763049-m03", mac: "52:54:00:91:4b:ad", ip: "192.168.39.123"} in network mk-ha-763049
	I0729 10:42:00.859558   22547 main.go:141] libmachine: (ha-763049-m03) Reserved static IP address: 192.168.39.123
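The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff loop polling the network for the guest's DHCP lease. A stand-in sketch of such a wait loop, with jittered, growing delays; function and constant names here are illustrative, not minikube's retry.go internals:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for a MAC;
// in this sketch it never succeeds, so the loop runs until the deadline.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls with a growing, jittered delay until an IP appears or the
// timeout elapses, similar in spirit to the retries visible in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:91:4b:ad", 3*time.Second); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}
```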
	I0729 10:42:00.859588   22547 main.go:141] libmachine: (ha-763049-m03) Waiting for SSH to be available...
	I0729 10:42:00.859603   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Getting to WaitForSSH function...
	I0729 10:42:00.862471   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.862925   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:00.862956   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.863191   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using SSH client type: external
	I0729 10:42:00.863223   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa (-rw-------)
	I0729 10:42:00.863256   22547 main.go:141] libmachine: (ha-763049-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:42:00.863270   22547 main.go:141] libmachine: (ha-763049-m03) DBG | About to run SSH command:
	I0729 10:42:00.863288   22547 main.go:141] libmachine: (ha-763049-m03) DBG | exit 0
	I0729 10:42:00.986936   22547 main.go:141] libmachine: (ha-763049-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 10:42:00.987237   22547 main.go:141] libmachine: (ha-763049-m03) KVM machine creation complete!
	I0729 10:42:00.987562   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:42:00.988120   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:00.988380   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:00.988530   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:42:00.988544   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:42:00.989877   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:42:00.989894   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:42:00.989901   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:42:00.989907   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:00.992192   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.992695   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:00.992722   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.992932   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:00.993137   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:00.993286   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:00.993404   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:00.993541   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:00.993737   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:00.993748   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:42:01.098289   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:42:01.098311   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:42:01.098319   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.101054   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.101439   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.101469   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.101635   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.101833   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.102026   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.102175   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.102322   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.102493   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.102505   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:42:01.208807   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:42:01.208869   22547 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:42:01.208879   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:42:01.208888   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.209121   22547 buildroot.go:166] provisioning hostname "ha-763049-m03"
	I0729 10:42:01.209152   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.209365   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.212241   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.212632   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.212663   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.212808   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.213004   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.213163   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.213317   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.213478   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.213676   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.213695   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049-m03 && echo "ha-763049-m03" | sudo tee /etc/hostname
	I0729 10:42:01.335398   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049-m03
	
	I0729 10:42:01.335425   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.338393   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.338771   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.338801   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.339032   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.339261   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.339431   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.339578   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.339720   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.339923   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.339942   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:42:01.458069   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:42:01.458098   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:42:01.458120   22547 buildroot.go:174] setting up certificates
	I0729 10:42:01.458134   22547 provision.go:84] configureAuth start
	I0729 10:42:01.458144   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.458397   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:01.460935   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.461235   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.461257   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.461412   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.463357   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.463699   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.463738   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.463892   22547 provision.go:143] copyHostCerts
	I0729 10:42:01.463922   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:42:01.463962   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:42:01.463976   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:42:01.464047   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:42:01.464121   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:42:01.464138   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:42:01.464145   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:42:01.464169   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:42:01.464212   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:42:01.464228   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:42:01.464234   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:42:01.464254   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:42:01.464299   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049-m03 san=[127.0.0.1 192.168.39.123 ha-763049-m03 localhost minikube]
	I0729 10:42:01.559347   22547 provision.go:177] copyRemoteCerts
	I0729 10:42:01.559402   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:42:01.559424   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.562058   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.562376   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.562399   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.562589   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.562787   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.562953   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.563088   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:01.646276   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:42:01.646354   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:42:01.676817   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:42:01.676901   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:42:01.703696   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:42:01.703771   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:42:01.727794   22547 provision.go:87] duration metric: took 269.645701ms to configureAuth
	I0729 10:42:01.727835   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:42:01.728036   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:01.728098   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.730618   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.731041   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.731069   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.731216   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.731398   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.731554   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.731717   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.731884   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.732030   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.732044   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:42:02.002725   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:42:02.002756   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:42:02.002765   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetURL
	I0729 10:42:02.004097   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using libvirt version 6000000
	I0729 10:42:02.006039   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.006323   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.006349   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.006552   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:42:02.006566   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:42:02.006572   22547 client.go:171] duration metric: took 25.194564051s to LocalClient.Create
	I0729 10:42:02.006592   22547 start.go:167] duration metric: took 25.194622863s to libmachine.API.Create "ha-763049"
	I0729 10:42:02.006602   22547 start.go:293] postStartSetup for "ha-763049-m03" (driver="kvm2")
	I0729 10:42:02.006615   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:42:02.006639   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.006915   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:42:02.006944   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.009239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.009607   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.009629   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.009837   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.010060   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.010220   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.010366   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.098515   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:42:02.103018   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:42:02.103108   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:42:02.103243   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:42:02.103339   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:42:02.103352   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:42:02.103455   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:42:02.113584   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:42:02.138624   22547 start.go:296] duration metric: took 132.00711ms for postStartSetup
	I0729 10:42:02.138682   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:42:02.139330   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:02.142115   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.142476   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.142507   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.142768   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:42:02.143010   22547 start.go:128] duration metric: took 25.350329223s to createHost
	I0729 10:42:02.143059   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.145150   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.145538   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.145565   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.145710   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.145900   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.146075   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.146252   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.146420   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:02.146585   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:02.146598   22547 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:42:02.251590   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249722.231340605
	
	I0729 10:42:02.251619   22547 fix.go:216] guest clock: 1722249722.231340605
	I0729 10:42:02.251626   22547 fix.go:229] Guest: 2024-07-29 10:42:02.231340605 +0000 UTC Remote: 2024-07-29 10:42:02.143036544 +0000 UTC m=+190.099042044 (delta=88.304061ms)
	I0729 10:42:02.251641   22547 fix.go:200] guest clock delta is within tolerance: 88.304061ms
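The fix.go lines above compare the guest's clock (read over SSH) against the host's idea of the current time and accept the result when the skew is small. A small sketch of that comparison using the timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's exact constant:

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute skew between the guest and host clocks
// and whether it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log: guest clock 1722249722.231340605 vs. the
	// host-side "Remote" timestamp 2024-07-29 10:42:02.143036544 UTC.
	guest := time.Unix(0, 1722249722231340605).UTC()
	host := time.Date(2024, 7, 29, 10, 42, 2, 143036544, time.UTC)

	delta, ok := clockDeltaOK(guest, host, time.Second) // tolerance assumed
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints 88.304061ms, true
}
```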
	I0729 10:42:02.251647   22547 start.go:83] releasing machines lock for "ha-763049-m03", held for 25.459111224s
	I0729 10:42:02.251665   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.251992   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:02.254864   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.255211   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.255239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.257560   22547 out.go:177] * Found network options:
	I0729 10:42:02.259099   22547 out.go:177]   - NO_PROXY=192.168.39.68,192.168.39.39
	W0729 10:42:02.260378   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 10:42:02.260399   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:42:02.260415   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.260975   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.261155   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.261266   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:42:02.261302   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	W0729 10:42:02.261413   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 10:42:02.261438   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:42:02.261502   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:42:02.261520   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.264239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264267   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264585   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.264611   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264667   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.264697   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264715   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.264916   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.264925   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.265081   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.265102   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.265231   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.265275   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.265436   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.513293   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:42:02.519462   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:42:02.519521   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:42:02.537820   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:42:02.537847   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:42:02.537916   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:42:02.556691   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:42:02.572916   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:42:02.572972   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:42:02.587938   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:42:02.604178   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:42:02.726347   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:42:02.889051   22547 docker.go:233] disabling docker service ...
	I0729 10:42:02.889113   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:42:02.904429   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:42:02.918427   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:42:03.033462   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:42:03.158573   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:42:03.175815   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:42:03.196455   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:42:03.196523   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.207608   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:42:03.207678   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.221815   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.236397   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.247993   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:42:03.259518   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.270730   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.290394   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.301657   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:42:03.311564   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:42:03.311631   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:42:03.326084   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:42:03.335954   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:03.468795   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:42:03.613381   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:42:03.613459   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:42:03.618799   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:42:03.618862   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:42:03.623207   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:42:03.664675   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:42:03.664766   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:42:03.695015   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:42:03.727157   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:42:03.728545   22547 out.go:177]   - env NO_PROXY=192.168.39.68
	I0729 10:42:03.729751   22547 out.go:177]   - env NO_PROXY=192.168.39.68,192.168.39.39
	I0729 10:42:03.731336   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:03.734069   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:03.734494   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:03.734517   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:03.734877   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:42:03.739268   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:42:03.752545   22547 mustload.go:65] Loading cluster: ha-763049
	I0729 10:42:03.752761   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:03.752994   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:03.753027   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:03.768550   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I0729 10:42:03.769040   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:03.769521   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:03.769549   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:03.769908   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:03.770102   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:42:03.771791   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:42:03.772073   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:03.772111   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:03.787097   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0729 10:42:03.787507   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:03.787989   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:03.788010   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:03.788396   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:03.788570   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:42:03.788754   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.123
	I0729 10:42:03.788768   22547 certs.go:194] generating shared ca certs ...
	I0729 10:42:03.788785   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:03.788933   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:42:03.788985   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:42:03.788997   22547 certs.go:256] generating profile certs ...
	I0729 10:42:03.789100   22547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:42:03.789134   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16
	I0729 10:42:03.789153   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.123 192.168.39.254]
	I0729 10:42:04.432556   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 ...
	I0729 10:42:04.432587   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16: {Name:mk54eba0cd0267f06fc79c42e90265a04854925c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:04.432746   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16 ...
	I0729 10:42:04.432760   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16: {Name:mkce1453ef8f6513dd27f14d0c85cf6052412e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:04.432832   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:42:04.432958   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:42:04.433078   22547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
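
The apiserver certificate generated above carries every IP the control plane can be reached on: the in-cluster service IP (10.96.0.1), localhost, the three node IPs and the kube-vip virtual IP 192.168.39.254. Below is a minimal Go sketch of signing such a certificate with IP SANs; the throwaway in-memory CA, key size and validity period are illustrative assumptions, not minikube's crypto.go, which reuses the existing minikubeCA files.

    // apiserver_san_cert.go - illustrative sketch only, not minikube's crypto.go.
    // Signs an API-server serving certificate carrying the IP SANs from the log.
    // Errors are elided for brevity; the in-memory CA stands in for minikubeCA.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube would load the existing minikubeCA key pair).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate with the IPs the apiserver must answer for.
    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.39"),
    		net.ParseIP("192.168.39.123"), net.ParseIP("192.168.39.254"),
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
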
	I0729 10:42:04.433092   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:42:04.433103   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:42:04.433117   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:42:04.433129   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:42:04.433141   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:42:04.433154   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:42:04.433165   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:42:04.433178   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:42:04.433224   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:42:04.433250   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:42:04.433259   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:42:04.433279   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:42:04.433301   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:42:04.433321   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:42:04.433355   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:42:04.433379   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:42:04.433393   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:42:04.433405   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:04.433435   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:42:04.436322   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:04.436735   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:42:04.436757   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:04.436958   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:42:04.437187   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:42:04.437331   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:42:04.437475   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:42:04.511056   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 10:42:04.517183   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 10:42:04.530480   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 10:42:04.535185   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 10:42:04.549387   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 10:42:04.554833   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 10:42:04.566042   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 10:42:04.571086   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 10:42:04.582122   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 10:42:04.586639   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 10:42:04.598544   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 10:42:04.603197   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 10:42:04.614394   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:42:04.641514   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:42:04.666585   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:42:04.692061   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:42:04.716643   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 10:42:04.742843   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:42:04.768917   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:42:04.794551   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:42:04.819655   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:42:04.852046   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:42:04.880019   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:42:04.905238   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 10:42:04.923313   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 10:42:04.942265   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 10:42:04.961839   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 10:42:04.979759   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 10:42:04.997918   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 10:42:05.015994   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 10:42:05.033334   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:42:05.039448   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:42:05.051189   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.056006   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.056059   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.061814   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:42:05.073121   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:42:05.084028   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.088547   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.088610   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.095648   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:42:05.109825   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:42:05.121971   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.126482   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.126536   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.132602   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
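
The three `openssl x509 -hash` / `ln -fs` pairs above register each CA certificate in the system trust store under its OpenSSL subject-hash name (for example b5213941.0). The following simplified Go sketch reproduces that hash-and-symlink step locally instead of over SSH and links the certificate file directly; the certificate path is taken from the log, everything else is an assumption.

    // cahash_link.go - simplified local sketch of the hash-and-symlink step above
    // (minikube runs the equivalent shell commands over SSH on the guest).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "hashing failed:", err)
    		os.Exit(1)
    	}
    	// e.g. "b5213941", giving /etc/ssl/certs/b5213941.0 as in the log.
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mirror ln -f: drop any stale link first
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
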
	I0729 10:42:05.144221   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:42:05.148550   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:42:05.148613   22547 kubeadm.go:934] updating node {m03 192.168.39.123 8443 v1.30.3 crio true true} ...
	I0729 10:42:05.148724   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:42:05.148754   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:42:05.148797   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:42:05.167127   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:42:05.167200   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
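
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs kube-vip as a static pod on each control-plane node; the pod announces the virtual IP 192.168.39.254 over ARP on eth0, uses the kube-system lease plndr-cp-lock for leader election, and fronts the API server port 8443 with load-balancing enabled. A rough sketch of rendering such a manifest from a few parameters is shown below; the template is heavily trimmed and the struct and field names are illustrative, only the values come from the log.

    // kubevip_manifest.go - rough sketch of rendering a kube-vip static pod
    // manifest from a handful of parameters; the template is heavily trimmed
    // compared to the full manifest in the log and is not minikube's template.
    package main

    import (
    	"os"
    	"text/template"
    )

    type vipConfig struct {
    	VIP       string // control-plane virtual IP
    	Interface string // interface the VIP is announced on
    	Port      int    // API server port fronted by the VIP
    	Image     string // kube-vip image reference
    }

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: vip_interface, value: {{.Interface}}}
        - {name: cp_enable, value: "true"}
        - {name: address, value: {{.VIP}}}
        image: {{.Image}}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
    	cfg := vipConfig{
    		VIP:       "192.168.39.254", // values below are taken from the log
    		Interface: "eth0",
    		Port:      8443,
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
    	}
    	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
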
	I0729 10:42:05.167265   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:42:05.178118   22547 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 10:42:05.178183   22547 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 10:42:05.188804   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 10:42:05.188819   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 10:42:05.188805   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 10:42:05.188844   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:42:05.188861   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:42:05.188866   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:05.188930   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:42:05.188937   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:42:05.204877   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:42:05.204922   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 10:42:05.204946   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 10:42:05.204954   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 10:42:05.204975   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:42:05.204978   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 10:42:05.217596   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 10:42:05.217632   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
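
Each kubeadm/kubectl/kubelet transfer above is gated by an existence check: a `stat -c "%s %y"` on the target path, whose failure (status 1) is what triggers the scp. The local Go sketch below captures that decide-before-copy comparison; needsCopy is an invented helper name and the check runs on the local filesystem rather than over SSH.

    // copycheck.go - local sketch of the size/mtime "existence check" idea;
    // needsCopy is an invented helper name, and the comparison runs on the
    // local filesystem instead of over SSH.
    package main

    import (
    	"fmt"
    	"os"
    )

    // needsCopy reports whether dst is missing or differs from src in size or
    // modification time, the comparison the stat calls above feed into.
    func needsCopy(src, dst string) (bool, error) {
    	s, err := os.Stat(src)
    	if err != nil {
    		return false, err
    	}
    	d, err := os.Stat(dst)
    	if os.IsNotExist(err) {
    		return true, nil // destination absent: transfer is required
    	}
    	if err != nil {
    		return false, err
    	}
    	return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
    }

    func main() {
    	if len(os.Args) != 3 {
    		fmt.Fprintln(os.Stderr, "usage: copycheck <src> <dst>")
    		os.Exit(2)
    	}
    	copyNeeded, err := needsCopy(os.Args[1], os.Args[2])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("copy needed:", copyNeeded)
    }
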
	I0729 10:42:06.199315   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 10:42:06.209678   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 10:42:06.227689   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:42:06.247715   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:42:06.265593   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:42:06.269663   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:42:06.282648   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:06.398587   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:06.416025   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:42:06.416342   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:06.416384   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:06.431983   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0729 10:42:06.432483   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:06.432998   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:06.433017   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:06.433359   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:06.433543   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:42:06.433668   22547 start.go:317] joinCluster: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:06.433837   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 10:42:06.433855   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:42:06.437148   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:06.437595   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:42:06.437620   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:06.437845   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:42:06.438022   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:42:06.438212   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:42:06.438360   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:42:06.603924   22547 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:42:06.603974   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s82u6a.99t20mc3mt933nji --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m03 --control-plane --apiserver-advertise-address=192.168.39.123 --apiserver-bind-port=8443"
	I0729 10:42:30.505125   22547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s82u6a.99t20mc3mt933nji --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m03 --control-plane --apiserver-advertise-address=192.168.39.123 --apiserver-bind-port=8443": (23.901124908s)
	I0729 10:42:30.505163   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 10:42:31.144168   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049-m03 minikube.k8s.io/updated_at=2024_07_29T10_42_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=false
	I0729 10:42:31.265725   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-763049-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 10:42:31.385404   22547 start.go:319] duration metric: took 24.951730695s to joinCluster
	I0729 10:42:31.385486   22547 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:42:31.385796   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:31.387054   22547 out.go:177] * Verifying Kubernetes components...
	I0729 10:42:31.388293   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:31.676511   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:31.694385   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:42:31.694727   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 10:42:31.694814   22547 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0729 10:42:31.695067   22547 node_ready.go:35] waiting up to 6m0s for node "ha-763049-m03" to be "Ready" ...
	I0729 10:42:31.695149   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:31.695160   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:31.695171   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:31.695178   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:31.699115   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:32.195961   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:32.195988   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:32.195999   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:32.196006   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:32.200044   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:32.695406   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:32.695429   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:32.695439   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:32.695444   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:32.699072   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.195969   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:33.196000   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:33.196011   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:33.196017   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:33.199646   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.695318   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:33.695341   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:33.695349   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:33.695352   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:33.698906   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.699573   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:34.195399   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:34.195419   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:34.195426   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:34.195430   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:34.198775   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:34.695985   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:34.696005   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:34.696012   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:34.696015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:34.699228   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.195283   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:35.195312   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:35.195322   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:35.195331   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:35.198836   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.695929   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:35.695954   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:35.695965   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:35.695975   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:35.699291   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.700070   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:36.195218   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:36.195237   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:36.195245   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:36.195250   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:36.198381   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:36.695672   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:36.695692   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:36.695699   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:36.695703   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:36.699011   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.195840   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:37.195866   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:37.195875   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:37.195879   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:37.199512   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.695967   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:37.695986   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:37.695994   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:37.695999   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:37.699898   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.700493   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:38.195896   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:38.195918   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:38.195924   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:38.195928   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:38.199317   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:38.695784   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:38.695833   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:38.695842   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:38.695846   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:38.699555   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.195874   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:39.195900   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:39.195908   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:39.195913   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:39.198957   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.695951   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:39.695978   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:39.695989   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:39.695995   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:39.699657   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.700709   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:40.195783   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:40.195808   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:40.195816   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:40.195820   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:40.199500   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:40.695534   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:40.695555   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:40.695568   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:40.695575   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:40.699277   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:41.195233   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:41.195258   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:41.195270   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:41.195276   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:41.198637   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:41.695981   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:41.696001   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:41.696009   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:41.696013   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:41.699726   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:42.195585   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:42.195606   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:42.195614   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:42.195617   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:42.199514   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:42.200029   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:42.695963   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:42.695986   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:42.695994   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:42.695997   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:42.699817   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:43.195465   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:43.195492   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:43.195503   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:43.195510   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:43.199125   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:43.695950   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:43.695971   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:43.695980   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:43.695984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:43.699972   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:44.195258   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:44.195279   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:44.195287   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:44.195290   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:44.198755   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:44.695635   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:44.695660   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:44.695669   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:44.695672   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:44.699878   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:44.700573   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:45.195256   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:45.195277   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:45.195285   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:45.195290   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:45.198767   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:45.695996   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:45.696018   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:45.696025   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:45.696031   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:45.700276   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:46.195351   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:46.195377   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:46.195387   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:46.195394   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:46.199540   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:46.695304   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:46.695326   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:46.695338   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:46.695342   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:46.698665   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:47.196204   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:47.196224   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:47.196233   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:47.196238   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:47.199672   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:47.200315   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:47.695941   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:47.695966   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:47.695977   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:47.695982   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:47.699345   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:48.195978   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:48.196000   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:48.196010   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:48.196015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:48.199859   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:48.695953   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:48.695975   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:48.695982   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:48.695986   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:48.699516   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.196239   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:49.196261   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.196272   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.196277   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.204307   22547 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 10:42:49.204859   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:49.696187   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:49.696208   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.696216   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.696224   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.699852   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.700522   22547 node_ready.go:49] node "ha-763049-m03" has status "Ready":"True"
	I0729 10:42:49.700542   22547 node_ready.go:38] duration metric: took 18.005457219s for node "ha-763049-m03" to be "Ready" ...
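
The readiness wait above is a simple poll of GET /api/v1/nodes/ha-763049-m03 roughly every 500ms until the node reports Ready. A minimal client-go sketch of the same loop follows; the KUBECONFIG source, timeout and node name are assumptions taken from the log, and this is not minikube's node_ready.go implementation.

    // waitready.go - minimal client-go sketch of waiting for a node's Ready
    // condition; not minikube's node_ready.go. Node name, timeout and the
    // KUBECONFIG source are assumptions taken from the log.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	const node = "ha-763049-m03" // node name from the log
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node", node, "is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for", node)
    	os.Exit(1)
    }
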
	I0729 10:42:49.700554   22547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:42:49.700625   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:49.700639   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.700649   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.700659   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.708243   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:42:49.717626   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.717729   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-l4n5p
	I0729 10:42:49.717742   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.717752   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.717761   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.720907   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.721510   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.721524   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.721532   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.721536   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.724105   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.724664   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.724682   22547 pod_ready.go:81] duration metric: took 7.026201ms for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.724694   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.724743   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xxwnd
	I0729 10:42:49.724750   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.724758   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.724764   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.727647   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.728574   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.728587   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.728594   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.728599   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.731289   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.731762   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.731781   22547 pod_ready.go:81] duration metric: took 7.077531ms for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.731792   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.731853   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049
	I0729 10:42:49.731864   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.731883   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.731891   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.734425   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.735055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.735070   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.735080   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.735084   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.737477   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.738099   22547 pod_ready.go:92] pod "etcd-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.738114   22547 pod_ready.go:81] duration metric: took 6.314888ms for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.738123   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.738169   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m02
	I0729 10:42:49.738175   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.738183   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.738188   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.740760   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.741292   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:49.741304   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.741311   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.741315   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.743846   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.744265   22547 pod_ready.go:92] pod "etcd-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.744287   22547 pod_ready.go:81] duration metric: took 6.154185ms for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.744299   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.896690   22547 request.go:629] Waited for 152.298095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m03
	I0729 10:42:49.896762   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m03
	I0729 10:42:49.896769   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.896779   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.896791   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.900114   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.097131   22547 request.go:629] Waited for 196.262635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:50.097200   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:50.097210   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.097223   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.097231   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.102548   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:42:50.103375   22547 pod_ready.go:92] pod "etcd-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.103401   22547 pod_ready.go:81] duration metric: took 359.075866ms for pod "etcd-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.103427   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.296496   22547 request.go:629] Waited for 192.981775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:42:50.296566   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:42:50.296574   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.296584   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.296594   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.300311   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.496524   22547 request.go:629] Waited for 195.383206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:50.496591   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:50.496596   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.496604   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.496607   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.499994   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.500616   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.500632   22547 pod_ready.go:81] duration metric: took 397.197271ms for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.500641   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.696778   22547 request.go:629] Waited for 196.070156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:42:50.696866   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:42:50.696873   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.696880   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.696885   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.700503   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.896594   22547 request.go:629] Waited for 195.383469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:50.896663   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:50.896670   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.896682   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.896693   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.899978   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.900789   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.900807   22547 pod_ready.go:81] duration metric: took 400.160228ms for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.900817   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.096852   22547 request.go:629] Waited for 195.971553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m03
	I0729 10:42:51.096920   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m03
	I0729 10:42:51.096928   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.096938   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.096953   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.100172   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.297170   22547 request.go:629] Waited for 196.426188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:51.297229   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:51.297236   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.297245   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.297252   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.301071   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.301735   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:51.301756   22547 pod_ready.go:81] duration metric: took 400.929181ms for pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.301768   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.497240   22547 request.go:629] Waited for 195.410619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:42:51.497294   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:42:51.497299   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.497306   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.497310   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.501004   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.696564   22547 request.go:629] Waited for 194.875696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:51.696618   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:51.696624   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.696634   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.696647   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.699832   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.700436   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:51.700453   22547 pod_ready.go:81] duration metric: took 398.676665ms for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.700462   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.897112   22547 request.go:629] Waited for 196.578899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:42:51.897178   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:42:51.897185   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.897196   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.897203   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.900645   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.096984   22547 request.go:629] Waited for 195.392706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:52.097046   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:52.097052   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.097063   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.097075   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.100380   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.100903   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.100924   22547 pod_ready.go:81] duration metric: took 400.455217ms for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.100937   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.296526   22547 request.go:629] Waited for 195.503229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m03
	I0729 10:42:52.296592   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m03
	I0729 10:42:52.296599   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.296609   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.296616   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.300446   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.496414   22547 request.go:629] Waited for 195.329562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:52.496475   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:52.496482   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.496492   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.496498   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.499842   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.500508   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.500527   22547 pod_ready.go:81] duration metric: took 399.581822ms for pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.500540   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.696694   22547 request.go:629] Waited for 196.085782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:42:52.696758   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:42:52.696772   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.696793   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.696817   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.700067   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.897137   22547 request.go:629] Waited for 196.37844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:52.897195   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:52.897200   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.897208   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.897213   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.901001   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.901699   22547 pod_ready.go:92] pod "kube-proxy-mhbk7" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.901717   22547 pod_ready.go:81] duration metric: took 401.169546ms for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.901726   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.096942   22547 request.go:629] Waited for 195.158447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:42:53.096999   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:42:53.097015   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.097044   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.097053   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.100648   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.296703   22547 request.go:629] Waited for 195.252275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:53.296773   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:53.296778   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.296788   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.296815   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.300568   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.301242   22547 pod_ready.go:92] pod "kube-proxy-tf7wt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:53.301258   22547 pod_ready.go:81] duration metric: took 399.526279ms for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.301267   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xhcs8" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.496295   22547 request.go:629] Waited for 194.965389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xhcs8
	I0729 10:42:53.496364   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xhcs8
	I0729 10:42:53.496369   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.496376   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.496381   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.500456   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:53.696797   22547 request.go:629] Waited for 195.365519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:53.696863   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:53.696871   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.696879   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.696887   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.700540   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.700963   22547 pod_ready.go:92] pod "kube-proxy-xhcs8" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:53.700981   22547 pod_ready.go:81] duration metric: took 399.707109ms for pod "kube-proxy-xhcs8" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.700992   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.896974   22547 request.go:629] Waited for 195.91913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:42:53.897026   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:42:53.897031   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.897038   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.897043   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.900549   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.096572   22547 request.go:629] Waited for 195.420483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:54.096623   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:54.096629   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.096637   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.096641   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.100058   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.100648   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.100667   22547 pod_ready.go:81] duration metric: took 399.666776ms for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.100678   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.297109   22547 request.go:629] Waited for 196.357909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:42:54.297167   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:42:54.297174   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.297184   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.297190   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.301126   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.497154   22547 request.go:629] Waited for 195.387946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:54.497229   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:54.497236   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.497247   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.497254   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.501472   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:54.501982   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.502001   22547 pod_ready.go:81] duration metric: took 401.314896ms for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.502010   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.697078   22547 request.go:629] Waited for 194.982364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m03
	I0729 10:42:54.697145   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m03
	I0729 10:42:54.697152   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.697162   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.697171   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.701333   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:54.896252   22547 request.go:629] Waited for 194.300295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:54.896312   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:54.896319   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.896329   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.896335   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.900102   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.900648   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.900663   22547 pod_ready.go:81] duration metric: took 398.647202ms for pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.900675   22547 pod_ready.go:38] duration metric: took 5.200108915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:42:54.900695   22547 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:42:54.900753   22547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:42:54.918082   22547 api_server.go:72] duration metric: took 23.532561601s to wait for apiserver process to appear ...
	I0729 10:42:54.918105   22547 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:42:54.918124   22547 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0729 10:42:54.922482   22547 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0729 10:42:54.922555   22547 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0729 10:42:54.922567   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.922577   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.922582   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.923567   22547 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 10:42:54.923630   22547 api_server.go:141] control plane version: v1.30.3
	I0729 10:42:54.923647   22547 api_server.go:131] duration metric: took 5.534322ms to wait for apiserver health ...
	I0729 10:42:54.923658   22547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:42:55.096250   22547 request.go:629] Waited for 172.526278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.096308   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.096315   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.096325   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.096329   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.103099   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:42:55.110339   22547 system_pods.go:59] 24 kube-system pods found
	I0729 10:42:55.110371   22547 system_pods.go:61] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:42:55.110375   22547 system_pods.go:61] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:42:55.110379   22547 system_pods.go:61] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:42:55.110382   22547 system_pods.go:61] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:42:55.110386   22547 system_pods.go:61] "etcd-ha-763049-m03" [204d285b-f87f-43e4-9ed4-af013eec6ec3] Running
	I0729 10:42:55.110389   22547 system_pods.go:61] "kindnet-567mx" [a6b03c26-f15c-49ba-9f6b-a487a9cf75e6] Running
	I0729 10:42:55.110391   22547 system_pods.go:61] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:42:55.110396   22547 system_pods.go:61] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:42:55.110399   22547 system_pods.go:61] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:42:55.110402   22547 system_pods.go:61] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:42:55.110406   22547 system_pods.go:61] "kube-apiserver-ha-763049-m03" [c23bc29f-d338-4278-bd55-ff5bf69b54a7] Running
	I0729 10:42:55.110412   22547 system_pods.go:61] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:42:55.110416   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:42:55.110423   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m03" [f5992b20-fb58-45d6-8fd4-e377ad3ab86f] Running
	I0729 10:42:55.110430   22547 system_pods.go:61] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:42:55.110438   22547 system_pods.go:61] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:42:55.110442   22547 system_pods.go:61] "kube-proxy-xhcs8" [34b5c03d-5eee-43e6-84e4-4c99bc710966] Running
	I0729 10:42:55.110448   22547 system_pods.go:61] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:42:55.110457   22547 system_pods.go:61] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:42:55.110460   22547 system_pods.go:61] "kube-scheduler-ha-763049-m03" [e734bd61-8b59-4feb-8dba-be4621887225] Running
	I0729 10:42:55.110463   22547 system_pods.go:61] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:42:55.110465   22547 system_pods.go:61] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:42:55.110468   22547 system_pods.go:61] "kube-vip-ha-763049-m03" [f4fadd4e-72f9-4506-b40b-35a8f6cc8dd4] Running
	I0729 10:42:55.110471   22547 system_pods.go:61] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:42:55.110478   22547 system_pods.go:74] duration metric: took 186.8141ms to wait for pod list to return data ...
	I0729 10:42:55.110488   22547 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:42:55.296937   22547 request.go:629] Waited for 186.365135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:42:55.296993   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:42:55.296999   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.297007   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.297015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.300360   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:55.300456   22547 default_sa.go:45] found service account: "default"
	I0729 10:42:55.300472   22547 default_sa.go:55] duration metric: took 189.975295ms for default service account to be created ...
	I0729 10:42:55.300482   22547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:42:55.496927   22547 request.go:629] Waited for 196.365003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.496996   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.497004   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.497016   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.497027   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.504770   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:42:55.511358   22547 system_pods.go:86] 24 kube-system pods found
	I0729 10:42:55.511385   22547 system_pods.go:89] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:42:55.511391   22547 system_pods.go:89] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:42:55.511395   22547 system_pods.go:89] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:42:55.511400   22547 system_pods.go:89] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:42:55.511404   22547 system_pods.go:89] "etcd-ha-763049-m03" [204d285b-f87f-43e4-9ed4-af013eec6ec3] Running
	I0729 10:42:55.511408   22547 system_pods.go:89] "kindnet-567mx" [a6b03c26-f15c-49ba-9f6b-a487a9cf75e6] Running
	I0729 10:42:55.511412   22547 system_pods.go:89] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:42:55.511415   22547 system_pods.go:89] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:42:55.511419   22547 system_pods.go:89] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:42:55.511423   22547 system_pods.go:89] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:42:55.511427   22547 system_pods.go:89] "kube-apiserver-ha-763049-m03" [c23bc29f-d338-4278-bd55-ff5bf69b54a7] Running
	I0729 10:42:55.511432   22547 system_pods.go:89] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:42:55.511437   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:42:55.511442   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m03" [f5992b20-fb58-45d6-8fd4-e377ad3ab86f] Running
	I0729 10:42:55.511445   22547 system_pods.go:89] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:42:55.511453   22547 system_pods.go:89] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:42:55.511457   22547 system_pods.go:89] "kube-proxy-xhcs8" [34b5c03d-5eee-43e6-84e4-4c99bc710966] Running
	I0729 10:42:55.511463   22547 system_pods.go:89] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:42:55.511466   22547 system_pods.go:89] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:42:55.511470   22547 system_pods.go:89] "kube-scheduler-ha-763049-m03" [e734bd61-8b59-4feb-8dba-be4621887225] Running
	I0729 10:42:55.511476   22547 system_pods.go:89] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:42:55.511480   22547 system_pods.go:89] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:42:55.511483   22547 system_pods.go:89] "kube-vip-ha-763049-m03" [f4fadd4e-72f9-4506-b40b-35a8f6cc8dd4] Running
	I0729 10:42:55.511487   22547 system_pods.go:89] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:42:55.511494   22547 system_pods.go:126] duration metric: took 211.008008ms to wait for k8s-apps to be running ...
	I0729 10:42:55.511502   22547 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:42:55.511546   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:55.527956   22547 system_svc.go:56] duration metric: took 16.443555ms WaitForService to wait for kubelet
	I0729 10:42:55.527997   22547 kubeadm.go:582] duration metric: took 24.142473175s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:42:55.528024   22547 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:42:55.696510   22547 request.go:629] Waited for 168.417638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0729 10:42:55.696562   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0729 10:42:55.696569   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.696577   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.696582   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.700452   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:55.701677   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701699   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701711   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701715   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701719   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701722   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701726   22547 node_conditions.go:105] duration metric: took 173.697622ms to run NodePressure ...
	I0729 10:42:55.701740   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:42:55.701761   22547 start.go:255] writing updated cluster config ...
	I0729 10:42:55.702068   22547 ssh_runner.go:195] Run: rm -f paused
	I0729 10:42:55.755723   22547 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 10:42:55.758083   22547 out.go:177] * Done! kubectl is now configured to use "ha-763049" cluster and "default" namespace by default
	
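	The lines above (pod_ready.go, request.go, round_trippers.go) record the post-start verification loop: for each control-plane pod the client GETs the pod and its node, then checks the pod's "Ready" condition, and the "Waited for ... due to client-side throttling" messages come from client-go's default rate limiter pacing those GETs. Below is a minimal sketch of that readiness poll written against client-go; it is an illustration only, under assumptions (placeholder kubeconfig path, 2s poll interval), not minikube's actual implementation.

	// readiness_sketch.go - minimal sketch of the pod "Ready" polling seen in the log above.
	// Assumptions: a reachable cluster via the placeholder kubeconfig path, and the pod name
	// "etcd-ha-763049" in kube-system, as in this run.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's "Ready" condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Wait up to 6 minutes, matching the "waiting up to 6m0s" messages in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-763049", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				return podReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`pod "etcd-ha-763049" is Ready`)
	}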
	
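	The healthz and version probes logged at 10:42:54 (api_server.go: "Checking apiserver healthz at https://192.168.39.68:8443/healthz ... returned 200: ok", followed by GET /version reporting v1.30.3) amount to two plain HTTPS GETs against the apiserver. The sketch below reproduces the healthz check only; skipping TLS verification is an assumption made to keep the example self-contained, whereas the real check uses the cluster's credentials.

	// healthz_sketch.go - minimal sketch of the /healthz probe seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		url := "https://192.168.39.68:8443/healthz" // address taken from this run's log
		client := &http.Client{Transport: &http.Transport{
			// Assumption for brevity: no CA bundle or client certs are configured here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}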
	==> CRI-O <==
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.223733969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87fc08f3-69d7-4c9a-9e29-e54fdc025cb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.234319754Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=899a1ca0-0450-475b-a7ec-fdbfd0db0101 name=/runtime.v1.ImageService/ListImages
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.235248517Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{I
d:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 reg
istry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube
-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,RepoTags:[docker.io/kindest/kindnetd:v20240719-e7903573],RepoDigests:[docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9 docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a],Size_:87174707,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=899a1ca0-0450-475b-a7ec-fdbfd0db0101 name=/runtime
.v1.ImageService/ListImages
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.249018257Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fab9610b-e5a1-46cb-88f4-bc2d1c6051c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.249089390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fab9610b-e5a1-46cb-88f4-bc2d1c6051c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.250131652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=add5275b-0142-4915-9316-068d3aaee91e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.250612160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722249994250545614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=add5275b-0142-4915-9316-068d3aaee91e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.251137811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b9a06fc-f5a5-42e0-be4e-3c3ede53f5b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.251190315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b9a06fc-f5a5-42e0-be4e-3c3ede53f5b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.251456813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b9a06fc-f5a5-42e0-be4e-3c3ede53f5b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.261546057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac8591c8-9f65-4fea-b0d2-01738013b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.261609282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac8591c8-9f65-4fea-b0d2-01738013b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.261939352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac8591c8-9f65-4fea-b0d2-01738013b1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.262603857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61c4632a-52da-4d2f-8982-8265e873a440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.262664748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61c4632a-52da-4d2f-8982-8265e873a440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.262934127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61c4632a-52da-4d2f-8982-8265e873a440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.263687983Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=274c645b-e100-40c8-8f90-6563ea313481 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.263972240Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6s8vm,Uid:f58d0d09-9e1d-4e80-917d-92b1264a6609,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249777009344084,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T10:42:56.684559917Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xxwnd,Uid:76efda45-4871-46fb-8a27-2e94f75de9f4,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722249606307825668,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T10:40:05.985801644Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d48db391-d5bb-4974-88d7-f5c71e3edb4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249606288114353,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T10:40:05.982133689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-l4n5p,Uid:d8f32893-3406-4eed-990f-f490efab94d6,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722249606281620960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T10:40:05.971342595Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&PodSandboxMetadata{Name:kube-proxy-mhbk7,Uid:b05b91ac-ef64-4bd2-9824-83723bddfef7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249590093566264,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-29T10:39:48.286297478Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&PodSandboxMetadata{Name:kindnet-fdmh5,Uid:4ed222fa-9517-42bb-bbde-6632f91bda05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249589495819854,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T10:39:48.288595490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&PodSandboxMetadata{Name:etcd-ha-763049,Uid:6d81a67c9133ee28f571d62ecf0564ce,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722249568156860656,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.68:2379,kubernetes.io/config.hash: 6d81a67c9133ee28f571d62ecf0564ce,kubernetes.io/config.seen: 2024-07-29T10:39:27.072845512Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-763049,Uid:a4c95936a93178bab407fb9d8697650f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249568154024565,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93
178bab407fb9d8697650f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4c95936a93178bab407fb9d8697650f,kubernetes.io/config.seen: 2024-07-29T10:39:27.072849703Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-763049,Uid:6112007873a5488ffeba87ad2297372e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249568133145825,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.68:8443,kubernetes.io/config.hash: 6112007873a5488ffeba87ad2297372e,kubernetes.io/config.seen: 2024-07-29T10:39:27.072847049Z,kubernetes.io/config.source: file
,},RuntimeHandler:,},&PodSandbox{Id:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-763049,Uid:34ed60b027789c76247cc6cad30afff1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249568130175534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{kubernetes.io/config.hash: 34ed60b027789c76247cc6cad30afff1,kubernetes.io/config.seen: 2024-07-29T10:39:27.072840978Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-763049,Uid:400e7dce88577de760f73261cae49d02,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722249568128530953,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 400e7dce88577de760f73261cae49d02,kubernetes.io/config.seen: 2024-07-29T10:39:27.072848459Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=274c645b-e100-40c8-8f90-6563ea313481 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.294789637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca7c943a-ad23-4348-ae1d-2f59112aa343 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.294862411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca7c943a-ad23-4348-ae1d-2f59112aa343 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.295719640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5206adc7-06f4-4e49-a9db-f6c7ad2593df name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.296210442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722249994296191429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5206adc7-06f4-4e49-a9db-f6c7ad2593df name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.296696481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dafd5ea1-e1bd-4f4d-b41e-ba1b8cebd806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.296813092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dafd5ea1-e1bd-4f4d-b41e-ba1b8cebd806 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:46:34 ha-763049 crio[679]: time="2024-07-29 10:46:34.297046475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dafd5ea1-e1bd-4f4d-b41e-ba1b8cebd806 name=/runtime.v1.RuntimeService/ListContainers
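The ListContainers and ListPodSandbox entries above are CRI-O answering the kubelet's periodic CRI polls over its gRPC socket; an empty filter simply returns the full container list. The same data can be pulled straight from the runtime socket. A minimal sketch in Go, assuming the default CRI-O socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 client (illustrative only, not part of the minikube test suite):

```go
// Query CRI-O over its gRPC socket: the same RuntimeService/ListContainers
// call that appears in the journal entries above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter matches every container ("No filters were applied,
	// returning full container list" in the log above).
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n",
			c.GetId(), c.GetState(), c.GetLabels()["io.kubernetes.pod.name"])
	}
}
```

Run against this node, it should report the same eleven running containers (busybox, the two coredns replicas, storage-provisioner, kindnet-cni, kube-proxy, kube-vip, etcd, kube-scheduler, kube-apiserver and kube-controller-manager) that the status table below summarises.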
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1cbf3ef31451       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   317257a7e3939       busybox-fc5497c4f-6s8vm
	5d7c5ba61589d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   83fa37df3ce80       coredns-7db6d8ff4d-xxwnd
	d2f12f3773838       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   1b78edf6f66dc       coredns-7db6d8ff4d-l4n5p
	752618ed171ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   55dc127b99c57       storage-provisioner
	d9b83381cff6c       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   ba95977795c59       kindnet-fdmh5
	db640a7c00be2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   374e9c4294dfb       kube-proxy-mhbk7
	25081f768fa7c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   d0a05f086a85b       kube-vip-ha-763049
	46540b0fd864e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   d0a2c28776819       etcd-ha-763049
	c31bbb31aa5f3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   d4383fe572e51       kube-scheduler-ha-763049
	e1dddce207d23       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   bdac6ea650c04       kube-apiserver-ha-763049
	5a0bf98403fc7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   09676447e6cf0       kube-controller-manager-ha-763049
	
	
	==> coredns [5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5] <==
	[INFO] 10.244.1.2:42496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001766762s
	[INFO] 10.244.0.4:47800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014284272s
	[INFO] 10.244.2.2:36542 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233506s
	[INFO] 10.244.2.2:35802 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170528s
	[INFO] 10.244.2.2:33377 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000253157s
	[INFO] 10.244.1.2:43934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123259s
	[INFO] 10.244.1.2:52875 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143409s
	[INFO] 10.244.1.2:46242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001561535s
	[INFO] 10.244.1.2:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101745s
	[INFO] 10.244.1.2:44298 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140588s
	[INFO] 10.244.1.2:41448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158036s
	[INFO] 10.244.0.4:38730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084044s
	[INFO] 10.244.0.4:57968 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085926s
	[INFO] 10.244.0.4:42578 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062705s
	[INFO] 10.244.2.2:38441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139508s
	[INFO] 10.244.2.2:50163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168308s
	[INFO] 10.244.1.2:42467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125757s
	[INFO] 10.244.1.2:39047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140115s
	[INFO] 10.244.1.2:37057 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091358s
	[INFO] 10.244.0.4:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128601s
	[INFO] 10.244.0.4:32850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078977s
	[INFO] 10.244.2.2:46995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149775s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126839s
	[INFO] 10.244.2.2:54400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169256s
	[INFO] 10.244.1.2:44674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109219s
	
	
	==> coredns [d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28] <==
	[INFO] 10.244.1.2:52525 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001622123s
	[INFO] 10.244.0.4:60906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156575s
	[INFO] 10.244.0.4:46156 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004679676s
	[INFO] 10.244.0.4:53576 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235312s
	[INFO] 10.244.0.4:58447 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097688s
	[INFO] 10.244.0.4:60709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157374s
	[INFO] 10.244.0.4:54900 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012654s
	[INFO] 10.244.0.4:45290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152164s
	[INFO] 10.244.2.2:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184737s
	[INFO] 10.244.2.2:53059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002292108s
	[INFO] 10.244.2.2:42700 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122981s
	[INFO] 10.244.2.2:44006 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001526846s
	[INFO] 10.244.2.2:41802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169435s
	[INFO] 10.244.1.2:49560 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026135s
	[INFO] 10.244.1.2:49037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111642s
	[INFO] 10.244.0.4:56631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201091s
	[INFO] 10.244.2.2:47071 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000231291s
	[INFO] 10.244.2.2:53040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132462s
	[INFO] 10.244.1.2:50475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008294s
	[INFO] 10.244.0.4:60819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157328s
	[INFO] 10.244.0.4:41267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078502s
	[INFO] 10.244.2.2:59469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127405s
	[INFO] 10.244.1.2:46106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125503s
	[INFO] 10.244.1.2:58330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150547s
	[INFO] 10.244.1.2:40880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136941s
	
	
	==> describe nodes <==
	Name:               ha-763049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:39:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:46:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-763049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03aa097434f1466280c9076799e841fb
	  System UUID:                03aa0974-34f1-4662-80c9-076799e841fb
	  Boot ID:                    efb539a5-e8b0-4a05-a8f7-bc957e281bdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s8vm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-l4n5p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m46s
	  kube-system                 coredns-7db6d8ff4d-xxwnd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m46s
	  kube-system                 etcd-ha-763049                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m
	  kube-system                 kindnet-fdmh5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m46s
	  kube-system                 kube-apiserver-ha-763049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-ha-763049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-mhbk7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-scheduler-ha-763049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-vip-ha-763049                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m44s  kube-proxy       
	  Normal  Starting                 7m     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m     kubelet          Node ha-763049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m     kubelet          Node ha-763049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m     kubelet          Node ha-763049 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m47s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal  NodeReady                6m29s  kubelet          Node ha-763049 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal  RegisteredNode           3m48s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	
	
	Name:               ha-763049-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:41:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:44:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-763049-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa1e337eb2824257a3354f0f8d3704f1
	  System UUID:                fa1e337e-b282-4257-a335-4f0f8d3704f1
	  Boot ID:                    8c97f7bd-1d8a-4627-9b71-d303a32f0197
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8wqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-763049-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m21s
	  kube-system                 kindnet-596ll                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-apiserver-ha-763049-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-ha-763049-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-tf7wt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-ha-763049-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-763049-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-763049-m02 status is now: NodeNotReady
	
	
	Name:               ha-763049-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_42_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:42:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-763049-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c75d5420424454e9176a9ed33c59890
	  System UUID:                3c75d542-0424-454e-9176-a9ed33c59890
	  Boot ID:                    7049dd80-8dc2-4fef-8f1b-67f92b461bf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bsjch                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-763049-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-567mx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-763049-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-763049-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-xhcs8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-763049-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-763049-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-763049-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal  RegisteredNode           3m48s                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	
	
	Name:               ha-763049-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_43_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:46:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-763049-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 656290ccdc044847a0820e68660df2c3
	  System UUID:                656290cc-dc04-4847-a082-0e68660df2c3
	  Boot ID:                    04ec9d00-e96f-4bc7-8146-b4b2850b5c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fq6mz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-9d6sv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-763049-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050772] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040355] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807627] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 10:39] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.616522] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.480584] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054857] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.202085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132761] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281350] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.343760] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.067157] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.957567] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.681727] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.722604] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.080303] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.543544] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.094019] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 10:41] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8] <==
	{"level":"warn","ts":"2024-07-29T10:46:34.593674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.602004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.60847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.628207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.630902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.631014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.633372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.641672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.648838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.653223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.657632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.665691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.674071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.681018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.684541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.687909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.695267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.701199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.707284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.710905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.713847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.719472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.726119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.731148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:46:34.733303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:46:34 up 7 min,  0 users,  load average: 0.10, 0.23, 0.13
	Linux ha-763049 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa] <==
	I0729 10:45:55.642267       1 main.go:299] handling current node
	I0729 10:46:05.644332       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:46:05.644434       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:46:05.644572       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:46:05.644636       1 main.go:299] handling current node
	I0729 10:46:05.644664       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:46:05.644681       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:46:05.644857       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:46:05.644911       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:46:15.642017       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:46:15.642128       1 main.go:299] handling current node
	I0729 10:46:15.642157       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:46:15.642178       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:46:15.642310       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:46:15.642338       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:46:15.642395       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:46:15.642413       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:46:25.648130       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:46:25.648175       1 main.go:299] handling current node
	I0729 10:46:25.648189       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:46:25.648195       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:46:25.648339       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:46:25.648345       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:46:25.648411       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:46:25.648437       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804] <==
	I0729 10:39:33.203550       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 10:39:33.211146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.68]
	I0729 10:39:33.212101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 10:39:33.220375       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 10:39:33.452319       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 10:39:34.074590       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 10:39:34.103106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 10:39:34.260447       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 10:39:47.632194       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 10:39:48.232408       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 10:43:01.615081       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38810: use of closed network connection
	E0729 10:43:01.802683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38826: use of closed network connection
	E0729 10:43:02.210291       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38852: use of closed network connection
	E0729 10:43:02.404606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38864: use of closed network connection
	E0729 10:43:02.598042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38870: use of closed network connection
	E0729 10:43:02.775338       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38898: use of closed network connection
	E0729 10:43:02.960424       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38908: use of closed network connection
	E0729 10:43:03.134733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38936: use of closed network connection
	E0729 10:43:03.500029       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38968: use of closed network connection
	E0729 10:43:03.696500       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38984: use of closed network connection
	E0729 10:43:03.895036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39014: use of closed network connection
	E0729 10:43:04.075453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39034: use of closed network connection
	E0729 10:43:04.292078       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39052: use of closed network connection
	E0729 10:43:04.475009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39070: use of closed network connection
	W0729 10:44:23.228214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123 192.168.39.68]
	
	
	==> kube-controller-manager [5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d] <==
	I0729 10:42:56.700335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.215551ms"
	I0729 10:42:56.731015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.55052ms"
	I0729 10:42:56.731153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.175µs"
	I0729 10:42:56.820308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.599968ms"
	I0729 10:42:57.060119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="239.672353ms"
	I0729 10:42:57.093257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.062424ms"
	I0729 10:42:57.093547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.558µs"
	I0729 10:42:57.628481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.065µs"
	I0729 10:43:00.653647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.966383ms"
	I0729 10:43:00.653972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.293µs"
	I0729 10:43:00.883824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.583563ms"
	I0729 10:43:00.886531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.864µs"
	I0729 10:43:01.176070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.979016ms"
	I0729 10:43:01.176378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.934µs"
	E0729 10:43:32.684447       1 certificate_controller.go:146] Sync csr-j7k5b failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7k5b": the object has been modified; please apply your changes to the latest version and try again
	E0729 10:43:32.693102       1 certificate_controller.go:146] Sync csr-j7k5b failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7k5b": the object has been modified; please apply your changes to the latest version and try again
	I0729 10:43:32.970908       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-763049-m04\" does not exist"
	I0729 10:43:33.043160       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-763049-m04" podCIDRs=["10.244.3.0/24"]
	E0729 10:43:33.254956       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bb8f6b29-8161-4ae0-ab22-08c44d6649ac", ResourceVersion:"979", Generation:1, CreationTimestamp:time.Date(2024, time.July, 29, 10, 39, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\
":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240719-e7903573\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostP
ath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00157a300), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", Vo
lumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d368), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1
.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Down
wardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d398), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.IS
CSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Containe
r{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240719-e7903573", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00157a320)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00157a3a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:res
ource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(
*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e2f7a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001933fa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ddb780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, H
ostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00244a440)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001933ff0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on
daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0729 10:43:37.505713       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049-m04"
	I0729 10:43:53.587235       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	I0729 10:44:46.458598       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	I0729 10:44:46.607463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.616045ms"
	I0729 10:44:46.607873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.186µs"
	
	
	==> kube-proxy [db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8] <==
	I0729 10:39:50.501329       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:39:50.522003       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.68"]
	I0729 10:39:50.563814       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:39:50.563898       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:39:50.563928       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:39:50.567489       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:39:50.568044       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:39:50.568111       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:39:50.570496       1 config.go:192] "Starting service config controller"
	I0729 10:39:50.570866       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:39:50.570939       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:39:50.570966       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:39:50.571980       1 config.go:319] "Starting node config controller"
	I0729 10:39:50.572922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:39:50.671502       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:39:50.671509       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:39:50.673320       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3] <==
	W0729 10:39:32.512094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:39:32.512143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 10:39:32.523638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:39:32.523704       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:39:32.555294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:39:32.555481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:39:32.582700       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:39:32.582793       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:39:32.598070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:39:32.598125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:39:32.659733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:39:32.659946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 10:39:32.766696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:39:32.766785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 10:39:34.410320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 10:42:27.739135       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xhcs8\": pod kube-proxy-xhcs8 is already assigned to node \"ha-763049-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xhcs8" node="ha-763049-m03"
	E0729 10:42:27.740202       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xhcs8\": pod kube-proxy-xhcs8 is already assigned to node \"ha-763049-m03\"" pod="kube-system/kube-proxy-xhcs8"
	E0729 10:43:33.078903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9d6sv\": pod kube-proxy-9d6sv is already assigned to node \"ha-763049-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9d6sv" node="ha-763049-m04"
	E0729 10:43:33.079019       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e99732d0-022f-4401-80cf-44def167bfba(kube-system/kube-proxy-9d6sv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9d6sv"
	E0729 10:43:33.079720       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9d6sv\": pod kube-proxy-9d6sv is already assigned to node \"ha-763049-m04\"" pod="kube-system/kube-proxy-9d6sv"
	I0729 10:43:33.079818       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9d6sv" node="ha-763049-m04"
	E0729 10:43:33.081154       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fq6mz\": pod kindnet-fq6mz is already assigned to node \"ha-763049-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fq6mz" node="ha-763049-m04"
	E0729 10:43:33.081240       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d049f5b4-d534-4e5c-8a0b-8734d15853c5(kube-system/kindnet-fq6mz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fq6mz"
	E0729 10:43:33.081267       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fq6mz\": pod kindnet-fq6mz is already assigned to node \"ha-763049-m04\"" pod="kube-system/kindnet-fq6mz"
	I0729 10:43:33.081293       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fq6mz" node="ha-763049-m04"
	
	
	==> kubelet <==
	Jul 29 10:42:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:42:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:42:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:42:56 ha-763049 kubelet[1375]: I0729 10:42:56.685104    1375 topology_manager.go:215] "Topology Admit Handler" podUID="f58d0d09-9e1d-4e80-917d-92b1264a6609" podNamespace="default" podName="busybox-fc5497c4f-6s8vm"
	Jul 29 10:42:56 ha-763049 kubelet[1375]: I0729 10:42:56.804316    1375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9hqq\" (UniqueName: \"kubernetes.io/projected/f58d0d09-9e1d-4e80-917d-92b1264a6609-kube-api-access-x9hqq\") pod \"busybox-fc5497c4f-6s8vm\" (UID: \"f58d0d09-9e1d-4e80-917d-92b1264a6609\") " pod="default/busybox-fc5497c4f-6s8vm"
	Jul 29 10:43:34 ha-763049 kubelet[1375]: E0729 10:43:34.240676    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:43:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:44:34 ha-763049 kubelet[1375]: E0729 10:44:34.241986    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:44:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:45:34 ha-763049 kubelet[1375]: E0729 10:45:34.241995    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:45:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:46:34 ha-763049 kubelet[1375]: E0729 10:46:34.245481    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:46:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-763049 -n ha-763049
helpers_test.go:261: (dbg) Run:  kubectl --context ha-763049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (3.225539633s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:46:39.376840   27518 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:39.377084   27518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:39.377094   27518 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:39.377098   27518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:39.377346   27518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:46:39.377552   27518 out.go:298] Setting JSON to false
	I0729 10:46:39.377584   27518 mustload.go:65] Loading cluster: ha-763049
	I0729 10:46:39.377618   27518 notify.go:220] Checking for updates...
	I0729 10:46:39.377993   27518 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:39.378008   27518 status.go:255] checking status of ha-763049 ...
	I0729 10:46:39.378400   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.378467   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.398502   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0729 10:46:39.399011   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.399633   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.399656   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.400021   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.400250   27518 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:46:39.401902   27518 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:46:39.401929   27518 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:39.402284   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.402325   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.417528   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0729 10:46:39.418002   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.418472   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.418497   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.418860   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.419087   27518 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:46:39.421843   27518 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:39.422316   27518 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:39.422342   27518 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:39.422486   27518 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:39.422838   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.422882   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.438034   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0729 10:46:39.438470   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.438917   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.438950   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.439295   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.439523   27518 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:46:39.439754   27518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:39.439781   27518 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:46:39.442866   27518 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:39.443332   27518 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:39.443358   27518 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:39.443508   27518 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:46:39.443711   27518 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:46:39.443850   27518 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:46:39.444015   27518 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:46:39.522258   27518 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:39.528584   27518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:39.544513   27518 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:39.544544   27518 api_server.go:166] Checking apiserver status ...
	I0729 10:46:39.544625   27518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:39.559594   27518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:46:39.569469   27518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:39.569518   27518 ssh_runner.go:195] Run: ls
	I0729 10:46:39.574748   27518 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:39.578838   27518 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:39.578858   27518 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:46:39.578868   27518 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:39.578883   27518 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:46:39.579168   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.579198   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.594863   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0729 10:46:39.595271   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.595705   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.595725   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.596068   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.596277   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:46:39.597705   27518 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:46:39.597723   27518 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:39.597999   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.598033   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.612706   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0729 10:46:39.613138   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.613600   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.613625   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.613956   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.614169   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:46:39.617092   27518 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:39.617530   27518 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:39.617559   27518 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:39.617735   27518 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:39.618216   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:39.618265   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:39.634528   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0729 10:46:39.635014   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:39.635544   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:39.635568   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:39.635889   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:39.636109   27518 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:46:39.636286   27518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:39.636303   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:46:39.639321   27518 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:39.639834   27518 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:39.639863   27518 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:39.639996   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:46:39.640182   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:46:39.640336   27518 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:46:39.640462   27518 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:46:42.195083   27518 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:42.195188   27518 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:46:42.195206   27518 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:42.195218   27518 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:46:42.195242   27518 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:42.195260   27518 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:46:42.195680   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.195731   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.210924   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0729 10:46:42.211310   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.211794   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.211822   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.212227   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.212443   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:46:42.213978   27518 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:46:42.213995   27518 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:42.214304   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.214347   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.229070   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I0729 10:46:42.229497   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.229958   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.229979   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.230309   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.230528   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:46:42.233701   27518 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:42.234188   27518 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:42.234205   27518 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:42.234371   27518 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:42.234678   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.234736   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.249394   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0729 10:46:42.249869   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.250500   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.250523   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.250903   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.251125   27518 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:46:42.251320   27518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:42.251340   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:46:42.254755   27518 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:42.255270   27518 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:42.255302   27518 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:42.255406   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:46:42.255602   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:46:42.255755   27518 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:46:42.255902   27518 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:46:42.342707   27518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:42.361316   27518 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:42.361347   27518 api_server.go:166] Checking apiserver status ...
	I0729 10:46:42.361384   27518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:42.377359   27518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:46:42.389324   27518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:42.389376   27518 ssh_runner.go:195] Run: ls
	I0729 10:46:42.394334   27518 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:42.398782   27518 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:42.398801   27518 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:46:42.398808   27518 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:42.398822   27518 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:46:42.399131   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.399172   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.413907   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0729 10:46:42.414352   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.414877   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.414907   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.415222   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.415435   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:46:42.417180   27518 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:46:42.417198   27518 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:42.417498   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.417546   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.433826   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0729 10:46:42.434320   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.434809   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.434852   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.435172   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.435371   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:46:42.438302   27518 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:42.438723   27518 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:42.438761   27518 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:42.438910   27518 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:42.439281   27518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:42.439320   27518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:42.455397   27518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0729 10:46:42.455779   27518 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:42.456261   27518 main.go:141] libmachine: Using API Version  1
	I0729 10:46:42.456280   27518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:42.456624   27518 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:42.456817   27518 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:46:42.457019   27518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:42.457039   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:46:42.459748   27518 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:42.460119   27518 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:42.460155   27518 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:42.460276   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:46:42.460465   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:46:42.460595   27518 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:46:42.460740   27518 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:46:42.542602   27518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:42.560638   27518 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (5.428968662s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:46:43.321425   27619 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:43.321657   27619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:43.321666   27619 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:43.321671   27619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:43.321855   27619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:46:43.322061   27619 out.go:298] Setting JSON to false
	I0729 10:46:43.322090   27619 mustload.go:65] Loading cluster: ha-763049
	I0729 10:46:43.322259   27619 notify.go:220] Checking for updates...
	I0729 10:46:43.322626   27619 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:43.322645   27619 status.go:255] checking status of ha-763049 ...
	I0729 10:46:43.323116   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.323168   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.338381   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0729 10:46:43.338865   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.339502   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.339531   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.339844   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.340025   27619 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:46:43.341735   27619 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:46:43.341752   27619 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:43.342167   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.342233   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.357095   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
	I0729 10:46:43.357474   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.357899   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.357945   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.358287   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.358465   27619 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:46:43.361359   27619 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:43.361741   27619 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:43.361771   27619 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:43.361866   27619 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:43.362197   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.362240   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.377208   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0729 10:46:43.377625   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.378093   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.378113   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.378413   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.378627   27619 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:46:43.378826   27619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:43.378859   27619 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:46:43.381736   27619 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:43.382176   27619 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:43.382214   27619 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:43.382366   27619 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:46:43.382544   27619 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:46:43.382753   27619 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:46:43.382900   27619 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:46:43.463002   27619 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:43.470322   27619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:43.486912   27619 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:43.486939   27619 api_server.go:166] Checking apiserver status ...
	I0729 10:46:43.486972   27619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:43.502484   27619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:46:43.513789   27619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:43.513845   27619 ssh_runner.go:195] Run: ls
	I0729 10:46:43.519891   27619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:43.524132   27619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:43.524153   27619 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:46:43.524164   27619 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:43.524180   27619 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:46:43.524462   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.524498   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.539577   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0729 10:46:43.539983   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.540401   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.540425   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.540741   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.540945   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:46:43.542538   27619 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:46:43.542552   27619 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:43.542884   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.542923   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.558413   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0729 10:46:43.558899   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.559435   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.559455   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.559752   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.559937   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:46:43.562997   27619 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:43.563395   27619 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:43.563414   27619 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:43.563557   27619 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:43.563966   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:43.564017   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:43.578815   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0729 10:46:43.579267   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:43.579662   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:43.579680   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:43.579994   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:43.580184   27619 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:46:43.580372   27619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:43.580392   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:46:43.583239   27619 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:43.583661   27619 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:43.583693   27619 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:43.583779   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:46:43.583958   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:46:43.584084   27619 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:46:43.584232   27619 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:46:45.267044   27619 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:45.267094   27619 retry.go:31] will retry after 236.726209ms: dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:48.343000   27619 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:48.343072   27619 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:46:48.343086   27619 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:48.343093   27619 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:46:48.343109   27619 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:48.343116   27619 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:46:48.343428   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.343466   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.358970   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0729 10:46:48.359396   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.359868   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.359889   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.360237   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.360433   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:46:48.362241   27619 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:46:48.362271   27619 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:48.362541   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.362585   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.377952   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
	I0729 10:46:48.378448   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.378957   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.378984   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.379253   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.379430   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:46:48.382540   27619 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:48.383021   27619 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:48.383058   27619 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:48.383231   27619 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:48.383626   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.383670   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.399264   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0729 10:46:48.399723   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.400225   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.400252   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.400547   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.400707   27619 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:46:48.400925   27619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:48.400965   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:46:48.404039   27619 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:48.404506   27619 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:48.404536   27619 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:48.404783   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:46:48.404943   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:46:48.405106   27619 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:46:48.405349   27619 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:46:48.487224   27619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:48.505071   27619 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:48.505099   27619 api_server.go:166] Checking apiserver status ...
	I0729 10:46:48.505128   27619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:48.518679   27619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:46:48.528094   27619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:48.528170   27619 ssh_runner.go:195] Run: ls
	I0729 10:46:48.533022   27619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:48.539665   27619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:48.539692   27619 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:46:48.539704   27619 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:48.539722   27619 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:46:48.540118   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.540152   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.556465   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0729 10:46:48.556893   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.557423   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.557453   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.557752   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.557933   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:46:48.559652   27619 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:46:48.559667   27619 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:48.559974   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.560016   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.575358   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0729 10:46:48.575813   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.576335   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.576357   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.576719   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.576911   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:46:48.580138   27619 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:48.580581   27619 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:48.580610   27619 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:48.580744   27619 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:48.581066   27619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:48.581100   27619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:48.596403   27619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I0729 10:46:48.596870   27619 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:48.597357   27619 main.go:141] libmachine: Using API Version  1
	I0729 10:46:48.597388   27619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:48.597686   27619 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:48.597878   27619 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:46:48.598112   27619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:48.598130   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:46:48.601117   27619 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:48.601647   27619 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:48.601666   27619 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:48.601823   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:46:48.601989   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:46:48.602159   27619 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:46:48.602276   27619 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:46:48.686786   27619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:48.707500   27619 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
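For context on what the trace above is doing: each `minikube status` invocation SSHes into every node, checks whether the kubelet systemd unit is active, locates the kube-apiserver process and its cgroup, probes the HA virtual IP's `/healthz` endpoint, and reads `/var` usage with `df`. The sketch below is a minimal, self-contained approximation of those probes run locally; it is not the minikube code path, and the VIP address is simply copied from the log, so both are illustrative assumptions.

```go
// Minimal sketch (not the minikube implementation) of the per-node probes
// visible in the log above: kubelet activity, apiserver process, the HA VIP
// healthz check, and /var usage. Addresses and patterns mirror the log only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// run executes a command and returns its trimmed combined output.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// kubelet: reported "Running" when the systemd unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet: Stopped/Nonexistent")
	} else {
		fmt.Println("kubelet: Running")
	}

	// apiserver process: same pgrep pattern as in the log.
	if pid, err := run("pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
		fmt.Println("kube-apiserver pid:", pid)
	}

	// healthz probe against the HA virtual IP (self-signed cert, hence skip-verify).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err == nil {
		fmt.Println("healthz:", resp.Status)
		resp.Body.Close()
	}

	// /var usage, equivalent to `df -h /var | awk 'NR==2{print $5}'` from the log.
	if out, err := run("sh", "-c", "df -h /var | awk 'NR==2{print $5}'"); err == nil {
		fmt.Println("/var used:", out)
	}
}
```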
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (4.603653191s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:46:50.636982   27723 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:50.637100   27723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:50.637110   27723 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:50.637114   27723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:50.637324   27723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:46:50.637534   27723 out.go:298] Setting JSON to false
	I0729 10:46:50.637563   27723 mustload.go:65] Loading cluster: ha-763049
	I0729 10:46:50.637689   27723 notify.go:220] Checking for updates...
	I0729 10:46:50.638025   27723 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:50.638046   27723 status.go:255] checking status of ha-763049 ...
	I0729 10:46:50.638478   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.638537   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.654100   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0729 10:46:50.654626   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.655291   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.655312   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.655661   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.655842   27723 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:46:50.657452   27723 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:46:50.657468   27723 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:50.657771   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.657815   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.673054   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0729 10:46:50.673485   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.673917   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.673938   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.674251   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.674520   27723 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:46:50.677304   27723 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:50.677684   27723 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:50.677717   27723 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:50.677905   27723 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:50.678221   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.678269   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.692868   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34523
	I0729 10:46:50.693264   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.693656   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.693672   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.693957   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.694163   27723 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:46:50.694386   27723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:50.694414   27723 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:46:50.697670   27723 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:50.698127   27723 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:50.698154   27723 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:50.698335   27723 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:46:50.698512   27723 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:46:50.698671   27723 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:46:50.698824   27723 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:46:50.779117   27723 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:50.785653   27723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:50.803686   27723 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:50.803717   27723 api_server.go:166] Checking apiserver status ...
	I0729 10:46:50.803760   27723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:50.819548   27723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:46:50.829811   27723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:50.829860   27723 ssh_runner.go:195] Run: ls
	I0729 10:46:50.834485   27723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:50.840507   27723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:50.840534   27723 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:46:50.840543   27723 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:50.840559   27723 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:46:50.840907   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.840938   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.856663   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0729 10:46:50.857120   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.857593   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.857614   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.857949   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.858143   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:46:50.859810   27723 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:46:50.859825   27723 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:50.860116   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.860149   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.876557   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0729 10:46:50.876987   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.877415   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.877437   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.877791   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.877981   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:46:50.881012   27723 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:50.881467   27723 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:50.881500   27723 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:50.881604   27723 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:50.881903   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:50.881947   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:50.898914   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0729 10:46:50.899413   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:50.899938   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:50.899958   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:50.900240   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:50.900441   27723 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:46:50.900629   27723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:50.900659   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:46:50.903799   27723 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:50.904291   27723 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:50.904320   27723 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:50.904445   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:46:50.904603   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:46:50.904770   27723 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:46:50.904967   27723 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:46:51.410967   27723 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:51.411025   27723 retry.go:31] will retry after 369.407233ms: dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:54.834967   27723 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:46:54.835037   27723 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:46:54.835050   27723 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:54.835057   27723 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:46:54.835106   27723 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:54.835113   27723 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:46:54.835400   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:54.835438   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:54.850790   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41221
	I0729 10:46:54.851351   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:54.851845   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:54.851868   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:54.852199   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:54.852383   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:46:54.854155   27723 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:46:54.854171   27723 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:54.854502   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:54.854540   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:54.869631   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0729 10:46:54.870090   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:54.870593   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:54.870612   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:54.870908   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:54.871106   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:46:54.874103   27723 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:54.874588   27723 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:54.874615   27723 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:54.874896   27723 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:46:54.875271   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:54.875314   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:54.890568   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0729 10:46:54.891070   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:54.891588   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:54.891614   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:54.891943   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:54.892164   27723 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:46:54.892355   27723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:54.892380   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:46:54.895397   27723 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:54.895917   27723 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:46:54.895945   27723 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:46:54.896129   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:46:54.896350   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:46:54.896596   27723 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:46:54.896830   27723 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:46:54.983424   27723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:54.999710   27723 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:54.999737   27723 api_server.go:166] Checking apiserver status ...
	I0729 10:46:54.999769   27723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:55.014570   27723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:46:55.025091   27723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:55.025139   27723 ssh_runner.go:195] Run: ls
	I0729 10:46:55.030050   27723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:55.038475   27723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:55.038507   27723 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:46:55.038518   27723 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:55.038537   27723 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:46:55.038856   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:55.038919   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:55.054749   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0729 10:46:55.055195   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:55.055690   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:55.055710   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:55.055952   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:55.056133   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:46:55.057688   27723 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:46:55.057701   27723 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:55.058010   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:55.058043   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:55.073250   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0729 10:46:55.073655   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:55.074160   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:55.074183   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:55.074494   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:55.074689   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:46:55.077597   27723 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:55.078027   27723 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:55.078057   27723 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:55.078231   27723 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:46:55.078525   27723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:55.078572   27723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:55.093327   27723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0729 10:46:55.093749   27723 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:55.094226   27723 main.go:141] libmachine: Using API Version  1
	I0729 10:46:55.094254   27723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:55.094530   27723 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:55.094695   27723 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:46:55.094911   27723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:55.094932   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:46:55.097647   27723 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:55.098071   27723 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:46:55.098093   27723 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:46:55.098302   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:46:55.098498   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:46:55.098628   27723 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:46:55.098767   27723 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:46:55.182679   27723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:55.196780   27723 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
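The repeated `dial tcp 192.168.39.39:22: connect: no route to host` lines are why ha-763049-m02 flips to `host: Error`: the status probe cannot even open the node's SSH port, so kubelet and apiserver are reported as Nonexistent and the command exits with status 3. A minimal reachability check in the same spirit (illustrative only, with the node IP copied from the log) looks like this:

```go
// Minimal sketch, not minikube code: try the node's SSH port a few times,
// as the status probe above does, before declaring the host unreachable.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.39:22" // node IP taken from the log; adjust for another cluster
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port reachable")
			return
		}
		lastErr = err
		time.Sleep(300 * time.Millisecond) // short backoff between retries
	}
	// Mirrors the report: Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
	fmt.Println("host unreachable:", lastErr)
}
```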
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (4.249814838s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:46:57.359011   27823 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:57.359126   27823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:57.359135   27823 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:57.359139   27823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:57.359339   27823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:46:57.359531   27823 out.go:298] Setting JSON to false
	I0729 10:46:57.359560   27823 mustload.go:65] Loading cluster: ha-763049
	I0729 10:46:57.359659   27823 notify.go:220] Checking for updates...
	I0729 10:46:57.360028   27823 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:57.360051   27823 status.go:255] checking status of ha-763049 ...
	I0729 10:46:57.360515   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.360560   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.378311   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I0729 10:46:57.378799   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.379348   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.379368   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.379705   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.379932   27823 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:46:57.381465   27823 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:46:57.381479   27823 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:57.381810   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.381849   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.396447   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
	I0729 10:46:57.396789   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.397266   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.397293   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.397633   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.397810   27823 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:46:57.400625   27823 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:57.400985   27823 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:57.401019   27823 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:57.401123   27823 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:46:57.401501   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.401539   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.415893   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34975
	I0729 10:46:57.416240   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.416679   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.416703   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.417073   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.417285   27823 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:46:57.417473   27823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:57.417513   27823 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:46:57.420574   27823 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:57.421005   27823 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:46:57.421039   27823 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:46:57.421103   27823 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:46:57.421293   27823 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:46:57.421446   27823 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:46:57.421560   27823 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:46:57.502354   27823 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:57.508245   27823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:46:57.526871   27823 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:46:57.526906   27823 api_server.go:166] Checking apiserver status ...
	I0729 10:46:57.526948   27823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:46:57.542720   27823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:46:57.553440   27823 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:46:57.553515   27823 ssh_runner.go:195] Run: ls
	I0729 10:46:57.558229   27823 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:46:57.563444   27823 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:46:57.563467   27823 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:46:57.563479   27823 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:57.563514   27823 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:46:57.563912   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.563954   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.579328   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0729 10:46:57.579698   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.580145   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.580192   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.580512   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.580727   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:46:57.582431   27823 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:46:57.582446   27823 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:57.582802   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.582842   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.597317   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0729 10:46:57.597757   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.598163   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.598189   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.598523   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.598718   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:46:57.601365   27823 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:57.601768   27823 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:57.601795   27823 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:57.601897   27823 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:46:57.602283   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:57.602326   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:57.617877   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0729 10:46:57.618293   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:57.618833   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:46:57.618853   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:57.619183   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:57.619364   27823 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:46:57.619525   27823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:46:57.619546   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:46:57.622619   27823 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:57.623120   27823 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:46:57.623153   27823 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:46:57.623380   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:46:57.623536   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:46:57.623670   27823 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:46:57.623774   27823 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:46:57.906974   27823 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:46:57.907031   27823 retry.go:31] will retry after 219.503585ms: dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:47:01.202936   27823 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:47:01.203021   27823 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:47:01.203037   27823 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:47:01.203048   27823 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:47:01.203076   27823 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:47:01.203090   27823 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:47:01.203403   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.203443   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.220657   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0729 10:47:01.221168   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.221650   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.221679   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.221977   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.222149   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:01.223742   27823 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:47:01.223756   27823 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:01.224043   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.224100   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.240045   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0729 10:47:01.240430   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.240940   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.240966   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.241288   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.241499   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:47:01.244880   27823 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:01.245399   27823 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:01.245427   27823 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:01.245560   27823 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:01.245845   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.245877   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.260788   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0729 10:47:01.261215   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.261694   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.261719   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.262089   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.262297   27823 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:01.262529   27823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:01.262554   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:01.265385   27823 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:01.265822   27823 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:01.265862   27823 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:01.266009   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:01.266177   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:01.266327   27823 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:01.266469   27823 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:01.351412   27823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:01.368983   27823 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:01.369006   27823 api_server.go:166] Checking apiserver status ...
	I0729 10:47:01.369035   27823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:01.384332   27823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:47:01.394806   27823 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:01.394866   27823 ssh_runner.go:195] Run: ls
	I0729 10:47:01.399661   27823 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:01.406283   27823 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:01.406324   27823 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:47:01.406336   27823 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:01.406356   27823 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:47:01.406648   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.406690   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.421920   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0729 10:47:01.422395   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.422901   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.422923   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.423196   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.423393   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:01.424959   27823 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:47:01.424976   27823 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:01.425382   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.425424   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.440604   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43981
	I0729 10:47:01.441060   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.441523   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.441543   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.441840   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.441995   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:47:01.444609   27823 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:01.445044   27823 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:01.445086   27823 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:01.445243   27823 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:01.445555   27823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:01.445597   27823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:01.460122   27823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0729 10:47:01.460567   27823 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:01.461008   27823 main.go:141] libmachine: Using API Version  1
	I0729 10:47:01.461030   27823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:01.461309   27823 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:01.461468   27823 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:01.461637   27823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:01.461654   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:01.464346   27823 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:01.464747   27823 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:01.464777   27823 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:01.464882   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:01.465055   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:01.465220   27823 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:01.465368   27823 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:01.550420   27823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:01.566376   27823 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
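For reference, the trace above shows the checks each `minikube status` invocation runs against every reachable node: an SSH session that measures /var usage with df/awk, a systemctl query for the kubelet unit, and an HTTPS GET against the control-plane /healthz endpoint at 192.168.39.254:8443. Below is a minimal, illustrative Go sketch of those same three probes executed locally; it is not minikube's implementation (which tunnels the commands over SSH via ssh_runner), and the endpoint address is simply the one seen in this log.

// healthprobe.go - illustrative only; mirrors the storage, kubelet and
// apiserver checks visible in the status trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Storage check, equivalent to: sh -c "df -h /var | awk 'NR==2{print $5}'"
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	fmt.Printf("df /var usage: %q err=%v\n", out, err)

	// Kubelet check, a simplified form of: sudo systemctl is-active --quiet service kubelet
	err = exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Printf("kubelet active: %v\n", err == nil)

	// API server health check against the HA virtual IP seen in the log.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}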
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (3.747559539s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:06.495535   27939 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:06.495649   27939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:06.495660   27939 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:06.495667   27939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:06.495835   27939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:47:06.496051   27939 out.go:298] Setting JSON to false
	I0729 10:47:06.496082   27939 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:06.496152   27939 notify.go:220] Checking for updates...
	I0729 10:47:06.496488   27939 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:06.496504   27939 status.go:255] checking status of ha-763049 ...
	I0729 10:47:06.496900   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.496962   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.516940   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0729 10:47:06.517441   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.518032   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.518059   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.518400   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.518615   27939 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:47:06.520093   27939 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:47:06.520106   27939 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:06.520390   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.520430   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.535424   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0729 10:47:06.535912   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.536468   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.536487   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.536764   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.536945   27939 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:47:06.539678   27939 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:06.540133   27939 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:06.540166   27939 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:06.540281   27939 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:06.540663   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.540710   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.555902   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0729 10:47:06.556272   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.556775   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.556805   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.557116   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.557316   27939 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:47:06.557518   27939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:06.557562   27939 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:47:06.560604   27939 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:06.561135   27939 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:06.561170   27939 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:06.561285   27939 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:47:06.561446   27939 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:47:06.561620   27939 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:47:06.561764   27939 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:47:06.646985   27939 ssh_runner.go:195] Run: systemctl --version
	I0729 10:47:06.654684   27939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:06.670745   27939 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:06.670778   27939 api_server.go:166] Checking apiserver status ...
	I0729 10:47:06.670830   27939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:06.686390   27939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:47:06.697902   27939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:06.697953   27939 ssh_runner.go:195] Run: ls
	I0729 10:47:06.702561   27939 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:06.708580   27939 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:06.708601   27939 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:47:06.708610   27939 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:06.708627   27939 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:47:06.708929   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.708968   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.724610   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0729 10:47:06.725090   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.725571   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.725594   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.725897   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.726087   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:47:06.727666   27939 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:47:06.727680   27939 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:47:06.727965   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.727997   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.744335   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I0729 10:47:06.744767   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.745282   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.745315   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.745721   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.745986   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:47:06.749135   27939 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:47:06.749614   27939 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:47:06.749646   27939 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:47:06.749795   27939 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:47:06.750232   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.750289   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.765379   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42049
	I0729 10:47:06.765816   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.766305   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.766333   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.766593   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.766791   27939 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:47:06.766945   27939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:06.766962   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:47:06.770006   27939 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:47:06.770381   27939 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:47:06.770412   27939 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:47:06.770532   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:47:06.770742   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:47:06.770895   27939 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:47:06.771049   27939 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	W0729 10:47:09.842970   27939 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0729 10:47:09.843083   27939 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0729 10:47:09.843103   27939 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:47:09.843113   27939 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:47:09.843137   27939 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0729 10:47:09.843157   27939 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:47:09.843510   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:09.843554   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:09.858383   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I0729 10:47:09.858862   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:09.859410   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:09.859430   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:09.859718   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:09.859917   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:09.861586   27939 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:47:09.861600   27939 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:09.861896   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:09.861941   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:09.877332   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45611
	I0729 10:47:09.877746   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:09.878228   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:09.878252   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:09.878648   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:09.878867   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:47:09.882178   27939 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:09.882677   27939 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:09.882725   27939 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:09.882935   27939 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:09.883368   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:09.883429   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:09.898814   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0729 10:47:09.899440   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:09.900017   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:09.900035   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:09.900395   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:09.900601   27939 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:09.900777   27939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:09.900798   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:09.904555   27939 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:09.905004   27939 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:09.905035   27939 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:09.905230   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:09.905409   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:09.905526   27939 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:09.905688   27939 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:09.986952   27939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:10.004270   27939 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:10.004294   27939 api_server.go:166] Checking apiserver status ...
	I0729 10:47:10.004328   27939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:10.019046   27939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:47:10.030297   27939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:10.030348   27939 ssh_runner.go:195] Run: ls
	I0729 10:47:10.035122   27939 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:10.039863   27939 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:10.039891   27939 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:47:10.039900   27939 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:10.039915   27939 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:47:10.040264   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:10.040311   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:10.055663   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0729 10:47:10.056171   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:10.056658   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:10.056682   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:10.056990   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:10.057198   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:10.058858   27939 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:47:10.058877   27939 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:10.059166   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:10.059213   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:10.074187   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0729 10:47:10.074610   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:10.075108   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:10.075134   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:10.075475   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:10.075658   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:47:10.078620   27939 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:10.079059   27939 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:10.079102   27939 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:10.079219   27939 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:10.079617   27939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:10.079664   27939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:10.094397   27939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0729 10:47:10.094877   27939 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:10.095397   27939 main.go:141] libmachine: Using API Version  1
	I0729 10:47:10.095416   27939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:10.095827   27939 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:10.095989   27939 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:10.096157   27939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:10.096176   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:10.099012   27939 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:10.099489   27939 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:10.099535   27939 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:10.099671   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:10.099902   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:10.100067   27939 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:10.100233   27939 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:10.186959   27939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:10.202472   27939 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
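ha-763049-m02 never answers on 192.168.39.39:22, so each status run above spends roughly three seconds in the "dial failure (will retry)" path before reporting Host:Error. The following is a small illustrative sketch (not minikube's sshutil code) of the kind of bounded TCP dial-with-retry loop those warnings describe.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection a few times before giving up,
// mirroring the "dial failure (will retry)" warnings in the trace above.
func dialWithRetry(addr string, attempts int, timeout time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(500 * time.Millisecond)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.39:22 is the SSH endpoint this log shows as unreachable.
	conn, err := dialWithRetry("192.168.39.39:22", 3, 2*time.Second)
	if err != nil {
		fmt.Println("status check would mark this host as Error:", err)
		return
	}
	conn.Close()
	fmt.Println("host reachable")
}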
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 7 (624.884977ms)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:17.081334   28072 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:17.081607   28072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:17.081617   28072 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:17.081621   28072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:17.081845   28072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:47:17.082023   28072 out.go:298] Setting JSON to false
	I0729 10:47:17.082048   28072 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:17.082106   28072 notify.go:220] Checking for updates...
	I0729 10:47:17.082568   28072 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:17.082589   28072 status.go:255] checking status of ha-763049 ...
	I0729 10:47:17.083036   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.083098   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.101779   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0729 10:47:17.102226   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.103054   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.103119   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.103476   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.103676   28072 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:47:17.105508   28072 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:47:17.105527   28072 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:17.105804   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.105837   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.120686   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0729 10:47:17.121119   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.121559   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.121579   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.121855   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.122079   28072 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:47:17.124931   28072 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:17.125313   28072 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:17.125344   28072 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:17.125469   28072 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:17.125853   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.125910   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.141617   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0729 10:47:17.142138   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.142680   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.142727   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.143065   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.143282   28072 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:47:17.143504   28072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:17.143528   28072 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:47:17.146591   28072 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:17.147063   28072 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:17.147099   28072 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:17.147373   28072 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:47:17.147547   28072 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:47:17.147680   28072 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:47:17.147901   28072 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:47:17.231579   28072 ssh_runner.go:195] Run: systemctl --version
	I0729 10:47:17.237866   28072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:17.258055   28072 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:17.258082   28072 api_server.go:166] Checking apiserver status ...
	I0729 10:47:17.258113   28072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:17.275169   28072 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:47:17.287243   28072 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:17.287294   28072 ssh_runner.go:195] Run: ls
	I0729 10:47:17.294089   28072 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:17.298128   28072 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:17.298155   28072 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:47:17.298177   28072 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:17.298200   28072 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:47:17.298507   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.298546   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.313211   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0729 10:47:17.313670   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.314175   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.314195   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.314507   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.314723   28072 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:47:17.316087   28072 status.go:330] ha-763049-m02 host status = "Stopped" (err=<nil>)
	I0729 10:47:17.316098   28072 status.go:343] host is not running, skipping remaining checks
	I0729 10:47:17.316104   28072 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:17.316118   28072 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:47:17.316404   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.316437   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.331069   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0729 10:47:17.331440   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.331885   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.331905   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.332163   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.332363   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:17.333865   28072 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:47:17.333878   28072 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:17.334167   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.334203   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.348984   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0729 10:47:17.349394   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.349843   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.349861   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.350187   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.350399   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:47:17.353467   28072 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:17.353973   28072 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:17.353997   28072 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:17.354205   28072 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:17.354612   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.354660   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.369759   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0729 10:47:17.370176   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.370631   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.370650   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.370960   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.371119   28072 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:17.371314   28072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:17.371334   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:17.374216   28072 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:17.374573   28072 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:17.374592   28072 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:17.374803   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:17.374983   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:17.375121   28072 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:17.375254   28072 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:17.455745   28072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:17.471795   28072 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:17.471823   28072 api_server.go:166] Checking apiserver status ...
	I0729 10:47:17.471855   28072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:17.487616   28072 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:47:17.498257   28072 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:17.498329   28072 ssh_runner.go:195] Run: ls
	I0729 10:47:17.503776   28072 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:17.507941   28072 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:17.507966   28072 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:47:17.507976   28072 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:17.507991   28072 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:47:17.508268   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.508304   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.522820   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0729 10:47:17.523191   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.523694   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.523714   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.524042   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.524233   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:17.525967   28072 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:47:17.525983   28072 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:17.526379   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.526418   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.540966   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0729 10:47:17.541329   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.541848   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.541873   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.542156   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.542344   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:47:17.545396   28072 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:17.545819   28072 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:17.545843   28072 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:17.546004   28072 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:17.546392   28072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:17.546433   28072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:17.561846   28072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0729 10:47:17.562211   28072 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:17.562718   28072 main.go:141] libmachine: Using API Version  1
	I0729 10:47:17.562744   28072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:17.563075   28072 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:17.563247   28072 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:17.563408   28072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:17.563429   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:17.566390   28072 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:17.566887   28072 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:17.566915   28072 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:17.567042   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:17.567216   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:17.567354   28072 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:17.567478   28072 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:17.650414   28072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:17.664996   28072 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
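Note the exit codes: the run above returned 7 once the m02 host was reported Stopped, while the earlier run returned 3 when the host was still in the Error state. If a script needs to act on this plain-text status output, a tiny parser like the illustrative sketch below (not part of minikube; a JSON output mode, where available, is the more robust choice) turns it into per-node field maps.

package main

import (
	"fmt"
	"strings"
)

// parseStatus splits the plain-text `minikube status` output into one
// field map per node: node name -> field -> value.
func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	var current string
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "":
			current = "" // blank line separates nodes
		case !strings.Contains(line, ":"):
			current = line // a bare name starts a new node section
			nodes[current] = map[string]string{}
		case current != "":
			key, val, _ := strings.Cut(line, ":")
			nodes[current][strings.TrimSpace(key)] = strings.TrimSpace(val)
		}
	}
	return nodes
}

func main() {
	sample := "ha-763049-m02\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	for node, fields := range parseStatus(sample) {
		fmt.Println(node, "host =", fields["host"])
	}
}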
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 7 (616.06248ms)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:21.706532   28161 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:21.706837   28161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:21.706847   28161 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:21.706851   28161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:21.707020   28161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:47:21.707197   28161 out.go:298] Setting JSON to false
	I0729 10:47:21.707225   28161 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:21.707351   28161 notify.go:220] Checking for updates...
	I0729 10:47:21.707713   28161 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:21.707734   28161 status.go:255] checking status of ha-763049 ...
	I0729 10:47:21.708203   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.708268   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.727011   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0729 10:47:21.727524   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.728255   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.728286   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.728617   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.728852   28161 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:47:21.730514   28161 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:47:21.730528   28161 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:21.730835   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.730869   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.745187   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I0729 10:47:21.745535   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.745991   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.746025   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.746382   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.746585   28161 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:47:21.749413   28161 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:21.749903   28161 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:21.749921   28161 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:21.750046   28161 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:21.750334   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.750383   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.765277   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0729 10:47:21.765658   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.766121   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.766154   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.766457   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.766657   28161 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:47:21.766858   28161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:21.766886   28161 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:47:21.769612   28161 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:21.770008   28161 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:21.770028   28161 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:21.770181   28161 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:47:21.770347   28161 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:47:21.770496   28161 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:47:21.770598   28161 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:47:21.851108   28161 ssh_runner.go:195] Run: systemctl --version
	I0729 10:47:21.858218   28161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:21.873263   28161 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:21.873290   28161 api_server.go:166] Checking apiserver status ...
	I0729 10:47:21.873326   28161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:21.891828   28161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:47:21.901865   28161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:21.901917   28161 ssh_runner.go:195] Run: ls
	I0729 10:47:21.907389   28161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:21.913169   28161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:21.913192   28161 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:47:21.913202   28161 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:21.913218   28161 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:47:21.913593   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.913632   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.928356   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0729 10:47:21.928952   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.929492   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.929515   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.929861   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.930068   28161 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:47:21.931510   28161 status.go:330] ha-763049-m02 host status = "Stopped" (err=<nil>)
	I0729 10:47:21.931528   28161 status.go:343] host is not running, skipping remaining checks
	I0729 10:47:21.931539   28161 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:21.931560   28161 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:47:21.931867   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.931902   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.946394   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I0729 10:47:21.946829   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.947353   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.947377   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.947729   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.947933   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:21.949700   28161 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:47:21.949719   28161 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:21.950061   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.950101   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.964960   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0729 10:47:21.965406   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.965900   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.965922   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.966222   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.966401   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:47:21.969026   28161 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:21.969452   28161 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:21.969487   28161 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:21.969623   28161 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:21.970007   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:21.970047   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:21.985519   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0729 10:47:21.985965   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:21.986443   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:21.986462   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:21.986785   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:21.986981   28161 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:21.987208   28161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:21.987224   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:21.990065   28161 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:21.990459   28161 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:21.990487   28161 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:21.990573   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:21.990777   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:21.990928   28161 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:21.991086   28161 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:22.074597   28161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:22.090632   28161 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:22.090659   28161 api_server.go:166] Checking apiserver status ...
	I0729 10:47:22.090689   28161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:22.104558   28161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:47:22.114837   28161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:22.114887   28161 ssh_runner.go:195] Run: ls
	I0729 10:47:22.119176   28161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:22.123458   28161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:22.123478   28161 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:47:22.123486   28161 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:22.123500   28161 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:47:22.123789   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:22.123819   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:22.138543   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0729 10:47:22.138978   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:22.139463   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:22.139486   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:22.139770   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:22.139947   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:22.141513   28161 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:47:22.141528   28161 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:22.141805   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:22.141837   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:22.156645   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0729 10:47:22.157109   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:22.157573   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:22.157595   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:22.157868   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:22.158087   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:47:22.160912   28161 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:22.161344   28161 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:22.161370   28161 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:22.161441   28161 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:22.161737   28161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:22.161769   28161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:22.176485   28161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0729 10:47:22.176989   28161 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:22.177456   28161 main.go:141] libmachine: Using API Version  1
	I0729 10:47:22.177476   28161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:22.177730   28161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:22.177907   28161 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:22.178104   28161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:22.178125   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:22.180841   28161 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:22.181215   28161 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:22.181242   28161 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:22.181346   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:22.181513   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:22.181673   28161 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:22.181792   28161 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:22.265985   28161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:22.281301   28161 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 7 (630.485101ms)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-763049-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:38.033517   28281 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:38.033727   28281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:38.033738   28281 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:38.033744   28281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:38.033976   28281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:47:38.034135   28281 out.go:298] Setting JSON to false
	I0729 10:47:38.034158   28281 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:38.034206   28281 notify.go:220] Checking for updates...
	I0729 10:47:38.034506   28281 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:38.034519   28281 status.go:255] checking status of ha-763049 ...
	I0729 10:47:38.034937   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.034974   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.054781   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0729 10:47:38.055281   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.055869   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.055896   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.056369   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.056621   28281 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:47:38.058519   28281 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:47:38.058538   28281 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:38.058914   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.058959   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.074794   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0729 10:47:38.075189   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.075654   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.075674   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.075968   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.076176   28281 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:47:38.078669   28281 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:38.079038   28281 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:38.079064   28281 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:38.079182   28281 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:47:38.079485   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.079526   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.093881   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
	I0729 10:47:38.094258   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.094835   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.094863   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.095181   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.095396   28281 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:47:38.095589   28281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:38.095623   28281 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:47:38.098419   28281 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:38.098896   28281 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:47:38.098921   28281 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:47:38.099064   28281 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:47:38.099260   28281 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:47:38.099423   28281 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:47:38.099575   28281 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:47:38.183814   28281 ssh_runner.go:195] Run: systemctl --version
	I0729 10:47:38.190621   28281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:38.209378   28281 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:38.209409   28281 api_server.go:166] Checking apiserver status ...
	I0729 10:47:38.209442   28281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:38.224815   28281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0729 10:47:38.235305   28281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:38.235362   28281 ssh_runner.go:195] Run: ls
	I0729 10:47:38.240358   28281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:38.244777   28281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:38.244799   28281 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:47:38.244808   28281 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:38.244826   28281 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:47:38.245176   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.245226   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.259644   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0729 10:47:38.260059   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.260584   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.260606   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.260949   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.261139   28281 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:47:38.262715   28281 status.go:330] ha-763049-m02 host status = "Stopped" (err=<nil>)
	I0729 10:47:38.262726   28281 status.go:343] host is not running, skipping remaining checks
	I0729 10:47:38.262743   28281 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:38.262757   28281 status.go:255] checking status of ha-763049-m03 ...
	I0729 10:47:38.263033   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.263068   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.277872   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0729 10:47:38.278437   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.278940   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.278958   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.279218   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.279396   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:38.281049   28281 status.go:330] ha-763049-m03 host status = "Running" (err=<nil>)
	I0729 10:47:38.281066   28281 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:38.281463   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.281525   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.297223   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0729 10:47:38.297694   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.298109   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.298135   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.298418   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.298595   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:47:38.301610   28281 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:38.302131   28281 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:38.302171   28281 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:38.302345   28281 host.go:66] Checking if "ha-763049-m03" exists ...
	I0729 10:47:38.302674   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.302743   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.317237   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0729 10:47:38.317695   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.318171   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.318199   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.318491   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.318708   28281 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:38.318932   28281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:38.318954   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:38.321956   28281 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:38.322416   28281 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:38.322447   28281 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:38.322580   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:38.322771   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:38.322946   28281 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:38.323075   28281 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:38.407326   28281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:38.423879   28281 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:47:38.423919   28281 api_server.go:166] Checking apiserver status ...
	I0729 10:47:38.423958   28281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:47:38.438771   28281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W0729 10:47:38.448263   28281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:47:38.448319   28281 ssh_runner.go:195] Run: ls
	I0729 10:47:38.453372   28281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:47:38.458072   28281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:47:38.458103   28281 status.go:422] ha-763049-m03 apiserver status = Running (err=<nil>)
	I0729 10:47:38.458121   28281 status.go:257] ha-763049-m03 status: &{Name:ha-763049-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:47:38.458140   28281 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:47:38.458479   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.458511   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.474740   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0729 10:47:38.475138   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.475625   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.475646   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.475984   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.476174   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:38.477798   28281 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:47:38.477814   28281 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:38.478133   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.478169   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.492744   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0729 10:47:38.493107   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.493552   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.493572   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.493841   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.494072   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:47:38.496745   28281 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:38.497136   28281 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:38.497176   28281 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:38.497336   28281 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:47:38.497632   28281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:38.497674   28281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:38.513015   28281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0729 10:47:38.513458   28281 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:38.513956   28281 main.go:141] libmachine: Using API Version  1
	I0729 10:47:38.513978   28281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:38.514304   28281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:38.514471   28281 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:38.514660   28281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:47:38.514681   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:38.517809   28281 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:38.518237   28281 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:38.518293   28281 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:38.518391   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:38.518592   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:38.518779   28281 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:38.518929   28281 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:38.606795   28281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:47:38.622112   28281 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-763049 -n ha-763049
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-763049 logs -n 25: (1.499468865s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m03_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m04 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp testdata/cp-test.txt                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m04_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03:/home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m03 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-763049 node stop m02 -v=7                                                    | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-763049 node start m02 -v=7                                                   | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:38:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:38:52.077459   22547 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:38:52.077714   22547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:52.077722   22547 out.go:304] Setting ErrFile to fd 2...
	I0729 10:38:52.077726   22547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:52.077902   22547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:38:52.078455   22547 out.go:298] Setting JSON to false
	I0729 10:38:52.079272   22547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1278,"bootTime":1722248254,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:38:52.079333   22547 start.go:139] virtualization: kvm guest
	I0729 10:38:52.081563   22547 out.go:177] * [ha-763049] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:38:52.082960   22547 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:38:52.083017   22547 notify.go:220] Checking for updates...
	I0729 10:38:52.085331   22547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:38:52.086636   22547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:38:52.087857   22547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.089105   22547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:38:52.090271   22547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:38:52.091526   22547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:38:52.125699   22547 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 10:38:52.126898   22547 start.go:297] selected driver: kvm2
	I0729 10:38:52.126910   22547 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:38:52.126921   22547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:38:52.127617   22547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:38:52.127697   22547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:38:52.142364   22547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:38:52.142428   22547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:38:52.142632   22547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:38:52.142722   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:38:52.142737   22547 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 10:38:52.142744   22547 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:38:52.142814   22547 start.go:340] cluster config:
	{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:38:52.142911   22547 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:38:52.144472   22547 out.go:177] * Starting "ha-763049" primary control-plane node in "ha-763049" cluster
	I0729 10:38:52.145678   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:38:52.145706   22547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:38:52.145714   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:38:52.145777   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:38:52.145786   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:38:52.146065   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:38:52.146083   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json: {Name:mk8944791de2b6e7d06bc31c24e321168e26f676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:52.146208   22547 start.go:360] acquireMachinesLock for ha-763049: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:38:52.146234   22547 start.go:364] duration metric: took 14.885µs to acquireMachinesLock for "ha-763049"
	I0729 10:38:52.146249   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:38:52.146300   22547 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 10:38:52.148831   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:38:52.148959   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:52.148994   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:52.163354   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0729 10:38:52.163778   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:52.164357   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:38:52.164374   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:52.164697   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:52.164913   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:38:52.165057   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:38:52.165195   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:38:52.165224   22547 client.go:168] LocalClient.Create starting
	I0729 10:38:52.165253   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:38:52.165282   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:38:52.165295   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:38:52.165355   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:38:52.165372   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:38:52.165390   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:38:52.165405   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:38:52.165418   22547 main.go:141] libmachine: (ha-763049) Calling .PreCreateCheck
	I0729 10:38:52.165764   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:38:52.166158   22547 main.go:141] libmachine: Creating machine...
	I0729 10:38:52.166170   22547 main.go:141] libmachine: (ha-763049) Calling .Create
	I0729 10:38:52.166298   22547 main.go:141] libmachine: (ha-763049) Creating KVM machine...
	I0729 10:38:52.167495   22547 main.go:141] libmachine: (ha-763049) DBG | found existing default KVM network
	I0729 10:38:52.168190   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.168065   22570 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0729 10:38:52.168221   22547 main.go:141] libmachine: (ha-763049) DBG | created network xml: 
	I0729 10:38:52.168240   22547 main.go:141] libmachine: (ha-763049) DBG | <network>
	I0729 10:38:52.168248   22547 main.go:141] libmachine: (ha-763049) DBG |   <name>mk-ha-763049</name>
	I0729 10:38:52.168255   22547 main.go:141] libmachine: (ha-763049) DBG |   <dns enable='no'/>
	I0729 10:38:52.168261   22547 main.go:141] libmachine: (ha-763049) DBG |   
	I0729 10:38:52.168269   22547 main.go:141] libmachine: (ha-763049) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 10:38:52.168277   22547 main.go:141] libmachine: (ha-763049) DBG |     <dhcp>
	I0729 10:38:52.168288   22547 main.go:141] libmachine: (ha-763049) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 10:38:52.168300   22547 main.go:141] libmachine: (ha-763049) DBG |     </dhcp>
	I0729 10:38:52.168329   22547 main.go:141] libmachine: (ha-763049) DBG |   </ip>
	I0729 10:38:52.168340   22547 main.go:141] libmachine: (ha-763049) DBG |   
	I0729 10:38:52.168345   22547 main.go:141] libmachine: (ha-763049) DBG | </network>
	I0729 10:38:52.168350   22547 main.go:141] libmachine: (ha-763049) DBG | 
	I0729 10:38:52.173436   22547 main.go:141] libmachine: (ha-763049) DBG | trying to create private KVM network mk-ha-763049 192.168.39.0/24...
	I0729 10:38:52.239432   22547 main.go:141] libmachine: (ha-763049) DBG | private KVM network mk-ha-763049 192.168.39.0/24 created
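The <network> definition printed above is ordinary libvirt XML. As a rough illustration (not the kvm2 driver's code), the same document can be rendered from a Go template using the subnet values taken from the log:

// Sketch only: render a libvirt <network> definition like the one logged
// above. The real driver talks to libvirt directly; the printed XML could
// also be fed to `virsh net-define` by hand.
package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	p := netParams{
		Name:      "mk-ha-763049",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	template.Must(template.New("net").Parse(networkTmpl)).Execute(os.Stdout, p)
}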
	I0729 10:38:52.239455   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.239376   22570 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.239466   22547 main.go:141] libmachine: (ha-763049) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 ...
	I0729 10:38:52.239507   22547 main.go:141] libmachine: (ha-763049) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:38:52.239618   22547 main.go:141] libmachine: (ha-763049) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:38:52.480346   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.480196   22570 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa...
	I0729 10:38:52.553287   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.553150   22570 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/ha-763049.rawdisk...
	I0729 10:38:52.553315   22547 main.go:141] libmachine: (ha-763049) DBG | Writing magic tar header
	I0729 10:38:52.553326   22547 main.go:141] libmachine: (ha-763049) DBG | Writing SSH key tar header
	I0729 10:38:52.553334   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:52.553267   22570 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 ...
	I0729 10:38:52.553467   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049
	I0729 10:38:52.553479   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049 (perms=drwx------)
	I0729 10:38:52.553485   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:38:52.553493   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:52.553502   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:38:52.553512   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:38:52.553524   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:38:52.553536   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:38:52.553570   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:38:52.553580   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:38:52.553588   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:38:52.553597   22547 main.go:141] libmachine: (ha-763049) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:38:52.553607   22547 main.go:141] libmachine: (ha-763049) DBG | Checking permissions on dir: /home
	I0729 10:38:52.553621   22547 main.go:141] libmachine: (ha-763049) DBG | Skipping /home - not owner
	I0729 10:38:52.553633   22547 main.go:141] libmachine: (ha-763049) Creating domain...
	I0729 10:38:52.554621   22547 main.go:141] libmachine: (ha-763049) define libvirt domain using xml: 
	I0729 10:38:52.554643   22547 main.go:141] libmachine: (ha-763049) <domain type='kvm'>
	I0729 10:38:52.554653   22547 main.go:141] libmachine: (ha-763049)   <name>ha-763049</name>
	I0729 10:38:52.554661   22547 main.go:141] libmachine: (ha-763049)   <memory unit='MiB'>2200</memory>
	I0729 10:38:52.554669   22547 main.go:141] libmachine: (ha-763049)   <vcpu>2</vcpu>
	I0729 10:38:52.554694   22547 main.go:141] libmachine: (ha-763049)   <features>
	I0729 10:38:52.554720   22547 main.go:141] libmachine: (ha-763049)     <acpi/>
	I0729 10:38:52.554733   22547 main.go:141] libmachine: (ha-763049)     <apic/>
	I0729 10:38:52.554741   22547 main.go:141] libmachine: (ha-763049)     <pae/>
	I0729 10:38:52.554754   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.554764   22547 main.go:141] libmachine: (ha-763049)   </features>
	I0729 10:38:52.554778   22547 main.go:141] libmachine: (ha-763049)   <cpu mode='host-passthrough'>
	I0729 10:38:52.554789   22547 main.go:141] libmachine: (ha-763049)   
	I0729 10:38:52.554806   22547 main.go:141] libmachine: (ha-763049)   </cpu>
	I0729 10:38:52.554817   22547 main.go:141] libmachine: (ha-763049)   <os>
	I0729 10:38:52.554824   22547 main.go:141] libmachine: (ha-763049)     <type>hvm</type>
	I0729 10:38:52.554834   22547 main.go:141] libmachine: (ha-763049)     <boot dev='cdrom'/>
	I0729 10:38:52.554841   22547 main.go:141] libmachine: (ha-763049)     <boot dev='hd'/>
	I0729 10:38:52.554851   22547 main.go:141] libmachine: (ha-763049)     <bootmenu enable='no'/>
	I0729 10:38:52.554861   22547 main.go:141] libmachine: (ha-763049)   </os>
	I0729 10:38:52.554889   22547 main.go:141] libmachine: (ha-763049)   <devices>
	I0729 10:38:52.554911   22547 main.go:141] libmachine: (ha-763049)     <disk type='file' device='cdrom'>
	I0729 10:38:52.554921   22547 main.go:141] libmachine: (ha-763049)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/boot2docker.iso'/>
	I0729 10:38:52.554930   22547 main.go:141] libmachine: (ha-763049)       <target dev='hdc' bus='scsi'/>
	I0729 10:38:52.554938   22547 main.go:141] libmachine: (ha-763049)       <readonly/>
	I0729 10:38:52.554942   22547 main.go:141] libmachine: (ha-763049)     </disk>
	I0729 10:38:52.554949   22547 main.go:141] libmachine: (ha-763049)     <disk type='file' device='disk'>
	I0729 10:38:52.554954   22547 main.go:141] libmachine: (ha-763049)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:38:52.554964   22547 main.go:141] libmachine: (ha-763049)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/ha-763049.rawdisk'/>
	I0729 10:38:52.554969   22547 main.go:141] libmachine: (ha-763049)       <target dev='hda' bus='virtio'/>
	I0729 10:38:52.554973   22547 main.go:141] libmachine: (ha-763049)     </disk>
	I0729 10:38:52.554981   22547 main.go:141] libmachine: (ha-763049)     <interface type='network'>
	I0729 10:38:52.554987   22547 main.go:141] libmachine: (ha-763049)       <source network='mk-ha-763049'/>
	I0729 10:38:52.554993   22547 main.go:141] libmachine: (ha-763049)       <model type='virtio'/>
	I0729 10:38:52.554999   22547 main.go:141] libmachine: (ha-763049)     </interface>
	I0729 10:38:52.555005   22547 main.go:141] libmachine: (ha-763049)     <interface type='network'>
	I0729 10:38:52.555017   22547 main.go:141] libmachine: (ha-763049)       <source network='default'/>
	I0729 10:38:52.555029   22547 main.go:141] libmachine: (ha-763049)       <model type='virtio'/>
	I0729 10:38:52.555037   22547 main.go:141] libmachine: (ha-763049)     </interface>
	I0729 10:38:52.555047   22547 main.go:141] libmachine: (ha-763049)     <serial type='pty'>
	I0729 10:38:52.555058   22547 main.go:141] libmachine: (ha-763049)       <target port='0'/>
	I0729 10:38:52.555068   22547 main.go:141] libmachine: (ha-763049)     </serial>
	I0729 10:38:52.555082   22547 main.go:141] libmachine: (ha-763049)     <console type='pty'>
	I0729 10:38:52.555093   22547 main.go:141] libmachine: (ha-763049)       <target type='serial' port='0'/>
	I0729 10:38:52.555124   22547 main.go:141] libmachine: (ha-763049)     </console>
	I0729 10:38:52.555141   22547 main.go:141] libmachine: (ha-763049)     <rng model='virtio'>
	I0729 10:38:52.555155   22547 main.go:141] libmachine: (ha-763049)       <backend model='random'>/dev/random</backend>
	I0729 10:38:52.555165   22547 main.go:141] libmachine: (ha-763049)     </rng>
	I0729 10:38:52.555170   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.555183   22547 main.go:141] libmachine: (ha-763049)     
	I0729 10:38:52.555196   22547 main.go:141] libmachine: (ha-763049)   </devices>
	I0729 10:38:52.555202   22547 main.go:141] libmachine: (ha-763049) </domain>
	I0729 10:38:52.555215   22547 main.go:141] libmachine: (ha-763049) 
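Once a <domain> document like the one above has been composed, registering it with libvirt is a single operation. The driver does this through the libvirt API; the Go sketch below shows an equivalent flow through the virsh CLI instead, with the XML assumed to live in a local file (hypothetical path):

// Illustrative sketch: define a libvirt domain from an XML file via virsh.
// This exec-based variant is an assumption for demonstration, not what
// docker-machine-driver-kvm2 actually does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func defineDomain(xmlPath string) error {
	// `virsh define` registers the domain; `virsh start <name>` would boot it.
	out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical file containing a <domain> document like the one logged above.
	if err := defineDomain("ha-763049.xml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}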
	I0729 10:38:52.559449   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:ee:0c:a6 in network default
	I0729 10:38:52.560041   22547 main.go:141] libmachine: (ha-763049) Ensuring networks are active...
	I0729 10:38:52.560064   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:52.560625   22547 main.go:141] libmachine: (ha-763049) Ensuring network default is active
	I0729 10:38:52.560910   22547 main.go:141] libmachine: (ha-763049) Ensuring network mk-ha-763049 is active
	I0729 10:38:52.561453   22547 main.go:141] libmachine: (ha-763049) Getting domain xml...
	I0729 10:38:52.562179   22547 main.go:141] libmachine: (ha-763049) Creating domain...
	I0729 10:38:53.735908   22547 main.go:141] libmachine: (ha-763049) Waiting to get IP...
	I0729 10:38:53.736598   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:53.736950   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:53.736989   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:53.736936   22570 retry.go:31] will retry after 260.647868ms: waiting for machine to come up
	I0729 10:38:53.999384   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:53.999821   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:53.999848   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:53.999778   22570 retry.go:31] will retry after 243.571937ms: waiting for machine to come up
	I0729 10:38:54.245332   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:54.245771   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:54.245803   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:54.245719   22570 retry.go:31] will retry after 477.405182ms: waiting for machine to come up
	I0729 10:38:54.724279   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:54.724733   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:54.724761   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:54.724686   22570 retry.go:31] will retry after 464.831075ms: waiting for machine to come up
	I0729 10:38:55.191623   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:55.192040   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:55.192066   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:55.191991   22570 retry.go:31] will retry after 536.612949ms: waiting for machine to come up
	I0729 10:38:55.729749   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:55.730165   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:55.730193   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:55.730119   22570 retry.go:31] will retry after 906.452891ms: waiting for machine to come up
	I0729 10:38:56.638140   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:56.638490   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:56.638535   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:56.638457   22570 retry.go:31] will retry after 973.555192ms: waiting for machine to come up
	I0729 10:38:57.613156   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:57.613603   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:57.613629   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:57.613567   22570 retry.go:31] will retry after 1.052023326s: waiting for machine to come up
	I0729 10:38:58.666683   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:58.667140   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:58.667161   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:58.667090   22570 retry.go:31] will retry after 1.254632627s: waiting for machine to come up
	I0729 10:38:59.923484   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:38:59.923837   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:38:59.923874   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:38:59.923819   22570 retry.go:31] will retry after 1.530478535s: waiting for machine to come up
	I0729 10:39:01.455809   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:01.456172   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:01.456199   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:01.456125   22570 retry.go:31] will retry after 2.507484818s: waiting for machine to come up
	I0729 10:39:03.966003   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:03.966593   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:03.966619   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:03.966559   22570 retry.go:31] will retry after 2.741723138s: waiting for machine to come up
	I0729 10:39:06.711555   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:06.712166   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:06.712197   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:06.712118   22570 retry.go:31] will retry after 3.481820681s: waiting for machine to come up
	I0729 10:39:10.195728   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:10.196102   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find current IP address of domain ha-763049 in network mk-ha-763049
	I0729 10:39:10.196129   22547 main.go:141] libmachine: (ha-763049) DBG | I0729 10:39:10.196040   22570 retry.go:31] will retry after 5.393944744s: waiting for machine to come up
	I0729 10:39:15.593535   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.593908   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has current primary IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.593926   22547 main.go:141] libmachine: (ha-763049) Found IP for machine: 192.168.39.68
	I0729 10:39:15.593939   22547 main.go:141] libmachine: (ha-763049) Reserving static IP address...
	I0729 10:39:15.594243   22547 main.go:141] libmachine: (ha-763049) DBG | unable to find host DHCP lease matching {name: "ha-763049", mac: "52:54:00:6d:89:08", ip: "192.168.39.68"} in network mk-ha-763049
	I0729 10:39:15.667923   22547 main.go:141] libmachine: (ha-763049) DBG | Getting to WaitForSSH function...
	I0729 10:39:15.667953   22547 main.go:141] libmachine: (ha-763049) Reserved static IP address: 192.168.39.68
	I0729 10:39:15.667966   22547 main.go:141] libmachine: (ha-763049) Waiting for SSH to be available...
	I0729 10:39:15.670365   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.670825   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.670866   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.670929   22547 main.go:141] libmachine: (ha-763049) DBG | Using SSH client type: external
	I0729 10:39:15.670947   22547 main.go:141] libmachine: (ha-763049) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa (-rw-------)
	I0729 10:39:15.671125   22547 main.go:141] libmachine: (ha-763049) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:39:15.671150   22547 main.go:141] libmachine: (ha-763049) DBG | About to run SSH command:
	I0729 10:39:15.671164   22547 main.go:141] libmachine: (ha-763049) DBG | exit 0
	I0729 10:39:15.794907   22547 main.go:141] libmachine: (ha-763049) DBG | SSH cmd err, output: <nil>: 
	I0729 10:39:15.795154   22547 main.go:141] libmachine: (ha-763049) KVM machine creation complete!
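The "waiting for machine to come up" lines above are a bounded retry loop whose sleep grows, with jitter, between attempts (260ms up to several seconds in the log). A generic Go sketch of that retry shape, with illustrative intervals rather than minikube's exact schedule:

// Minimal retry helper in the spirit of the retry.go lines above: call fn
// until it succeeds or the deadline passes, sleeping a growing, jittered
// interval between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait *= 2 // rough stand-in for the logged 260ms -> 5.4s progression
	}
}

func main() {
	// Example: pretend to poll a DHCP lease for the new domain's IP.
	var tries int
	err := retryUntil(2*time.Minute, func() error {
		tries++
		if tries < 3 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}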
	I0729 10:39:15.795476   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:39:15.796030   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:15.796287   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:15.796507   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:39:15.796521   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:15.797891   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:39:15.797909   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:39:15.797916   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:39:15.797924   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:15.800864   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.801186   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.801220   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.801409   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:15.801619   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.801777   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.801928   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:15.802109   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:15.802328   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:15.802340   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:39:15.906351   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:39:15.906381   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:39:15.906391   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:15.909047   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.909393   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:15.909418   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:15.909590   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:15.909788   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.909938   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:15.910068   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:15.910223   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:15.910438   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:15.910450   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:39:16.015990   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:39:16.016055   22547 main.go:141] libmachine: found compatible host: buildroot
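Provisioner detection above boils down to reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A small Go sketch of that parsing step, reusing the output captured in the log:

// Sketch: parse KEY=VALUE pairs from os-release output and check the ID.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	// Output copied from the log above.
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
	}
}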
	I0729 10:39:16.016062   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:39:16.016069   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.016332   22547 buildroot.go:166] provisioning hostname "ha-763049"
	I0729 10:39:16.016364   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.016513   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.018985   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.019250   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.019289   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.019356   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.019528   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.019701   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.019878   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.020028   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.020187   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.020197   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049 && echo "ha-763049" | sudo tee /etc/hostname
	I0729 10:39:16.137232   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:39:16.137259   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.139762   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.140063   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.140091   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.140247   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.140469   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.140641   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.140844   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.141007   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.141187   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.141203   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:39:16.252141   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:39:16.252178   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:39:16.252220   22547 buildroot.go:174] setting up certificates
	I0729 10:39:16.252233   22547 provision.go:84] configureAuth start
	I0729 10:39:16.252248   22547 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:39:16.252538   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:16.255138   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.255477   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.255498   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.255725   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.257976   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.258368   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.258394   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.258594   22547 provision.go:143] copyHostCerts
	I0729 10:39:16.258627   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:39:16.258672   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:39:16.258681   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:39:16.258783   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:39:16.258902   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:39:16.258922   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:39:16.258928   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:39:16.258957   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:39:16.258995   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:39:16.259011   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:39:16.259017   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:39:16.259039   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:39:16.259089   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049 san=[127.0.0.1 192.168.39.68 ha-763049 localhost minikube]
	I0729 10:39:16.327424   22547 provision.go:177] copyRemoteCerts
	I0729 10:39:16.327477   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:39:16.327500   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.330353   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.330638   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.330674   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.330826   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.331034   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.331193   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.331319   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:16.413579   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:39:16.413642   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:39:16.438013   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:39:16.438077   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 10:39:16.462631   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:39:16.462694   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:39:16.486583   22547 provision.go:87] duration metric: took 234.338734ms to configureAuth
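configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube, then copies the CA, server cert and key to /etc/docker on the guest. The sketch below issues a comparable certificate with Go's crypto/x509; it self-signs to stay self-contained, whereas minikube signs against its own CA, and the validity period here is an arbitrary example.

// Illustrative sketch of the "generating server cert ... san=[...]" step.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-763049"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-763049", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
	}
	// Template == parent makes this self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}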
	I0729 10:39:16.486610   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:39:16.486819   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:16.486904   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.489620   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.489972   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.490016   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.490225   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.490416   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.490562   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.490677   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.490902   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.491081   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.491099   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:39:16.769524   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:39:16.769547   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:39:16.769554   22547 main.go:141] libmachine: (ha-763049) Calling .GetURL
	I0729 10:39:16.770791   22547 main.go:141] libmachine: (ha-763049) DBG | Using libvirt version 6000000
	I0729 10:39:16.774633   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.775161   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.775181   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.775359   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:39:16.775368   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:39:16.775374   22547 client.go:171] duration metric: took 24.610141226s to LocalClient.Create
	I0729 10:39:16.775399   22547 start.go:167] duration metric: took 24.610203669s to libmachine.API.Create "ha-763049"
	I0729 10:39:16.775411   22547 start.go:293] postStartSetup for "ha-763049" (driver="kvm2")
	I0729 10:39:16.775423   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:39:16.775461   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:16.775699   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:39:16.775723   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.778044   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.778401   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.778427   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.778534   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.778727   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.778901   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.779070   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:16.861787   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:39:16.866195   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:39:16.866219   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:39:16.866291   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:39:16.866380   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:39:16.866391   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:39:16.866495   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:39:16.876269   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:39:16.900206   22547 start.go:296] duration metric: took 124.78063ms for postStartSetup
	I0729 10:39:16.900263   22547 main.go:141] libmachine: (ha-763049) Calling .GetConfigRaw
	I0729 10:39:16.900853   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:16.903365   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.903650   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.903680   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.903860   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:16.904020   22547 start.go:128] duration metric: took 24.757712223s to createHost
	I0729 10:39:16.904041   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:16.906106   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.906426   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:16.906447   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:16.906580   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:16.906739   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.906932   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:16.907049   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:16.907240   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:39:16.907452   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:39:16.907470   22547 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:39:17.011414   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249556.990672847
	
	I0729 10:39:17.011439   22547 fix.go:216] guest clock: 1722249556.990672847
	I0729 10:39:17.011448   22547 fix.go:229] Guest: 2024-07-29 10:39:16.990672847 +0000 UTC Remote: 2024-07-29 10:39:16.904031905 +0000 UTC m=+24.860037397 (delta=86.640942ms)
	I0729 10:39:17.011474   22547 fix.go:200] guest clock delta is within tolerance: 86.640942ms
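The guest-clock check above runs `date +%s.%N` on the guest and compares the result with the host clock. A Go sketch of that comparison, using the timestamps from the log and an assumed one-second tolerance (float64 parsing drops nanosecond precision, which is fine at millisecond scale):

// Sketch: parse the guest's `date +%s.%N` output and report clock drift.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1722249556.990672847", time.Unix(1722249556, 904031905))
	if err != nil {
		panic(err)
	}
	if delta < -time.Second || delta > time.Second {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}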
	I0729 10:39:17.011479   22547 start.go:83] releasing machines lock for "ha-763049", held for 24.8652374s
	I0729 10:39:17.011496   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.011779   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:17.014065   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.014378   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.014412   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.014510   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.014941   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.015161   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:17.015268   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:39:17.015304   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:17.015411   22547 ssh_runner.go:195] Run: cat /version.json
	I0729 10:39:17.015441   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:17.017842   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018049   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018194   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.018227   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018352   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:17.018429   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:17.018452   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:17.018519   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:17.018632   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:17.018719   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:17.018777   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:17.018847   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:17.018917   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:17.019017   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:17.114171   22547 ssh_runner.go:195] Run: systemctl --version
	I0729 10:39:17.120439   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:39:17.281213   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:39:17.287189   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:39:17.287259   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:39:17.303804   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:39:17.303828   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:39:17.303888   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:39:17.320281   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:39:17.334675   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:39:17.334752   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:39:17.349587   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:39:17.366548   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:39:17.492781   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:39:17.656837   22547 docker.go:233] disabling docker service ...
	I0729 10:39:17.656936   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:39:17.671794   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:39:17.685030   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:39:17.815598   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:39:17.942350   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:39:17.956570   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:39:17.975328   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:39:17.975394   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:17.985796   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:39:17.985891   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:17.996359   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.006652   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.016976   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:39:18.027669   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.038037   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.055454   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:39:18.065608   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:39:18.075028   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:39:18.075090   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:39:18.089097   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
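The three steps above (checking net.bridge.bridge-nf-call-iptables, loading br_netfilter when the proc entry is missing, and enabling IPv4 forwarding) are the standard kernel prerequisites for routing bridged pod traffic through iptables. Outside of this ad-hoc setup they are usually persisted with a sysctl drop-in along the lines of the sketch below (file name assumed):

	# /etc/sysctl.d/k8s.conf
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward                = 1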
	I0729 10:39:18.098583   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:18.223266   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:39:18.383865   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:39:18.383944   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:39:18.389094   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:39:18.389150   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:39:18.393115   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:39:18.432138   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:39:18.432214   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:39:18.460406   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:39:18.490525   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:39:18.491777   22547 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:39:18.494271   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:18.494574   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:18.494593   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:18.494801   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:39:18.498974   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
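The pair of commands above is an idempotent host-entry update: the grep only checks whether host.minikube.internal is already present, and the bash pipeline rewrites /etc/hosts by filtering out any stale line for that name and appending the current mapping. A minimal standalone Go sketch of the same pattern follows; the file path and entry are illustrative and this is not minikube's own implementation:

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostEntry rewrites hostsPath so that exactly one line maps ip to name,
	// dropping any earlier line whose last field is the same hostname.
	func ensureHostEntry(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				continue // stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Example values taken from the log above; the path is a scratch copy, not /etc/hosts.
		_ = ensureHostEntry("/tmp/hosts-example", "192.168.39.1", "host.minikube.internal")
	}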
	I0729 10:39:18.512326   22547 kubeadm.go:883] updating cluster {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:39:18.512428   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:39:18.512477   22547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:39:18.550499   22547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 10:39:18.550569   22547 ssh_runner.go:195] Run: which lz4
	I0729 10:39:18.554554   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 10:39:18.554636   22547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:39:18.558721   22547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:39:18.558750   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 10:39:20.028271   22547 crio.go:462] duration metric: took 1.473651879s to copy over tarball
	I0729 10:39:20.028361   22547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:39:22.189762   22547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161366488s)
	I0729 10:39:22.189805   22547 crio.go:469] duration metric: took 2.161483142s to extract the tarball
	I0729 10:39:22.189816   22547 ssh_runner.go:146] rm: /preloaded.tar.lz4
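Because no matching images were found by crictl, the ~406 MB preload tarball is copied to the node and unpacked directly into /var so that CRI-O's image store is populated before kubeadm runs. A rough Go sketch of that check-then-extract step is shown below; it simply shells out to tar the same way the log does, assumes tar and lz4 are on PATH, and uses placeholder paths rather than minikube's internals:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into dest, mirroring
	// `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`.
	func extractPreload(tarball, dest string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}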
	I0729 10:39:22.228114   22547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:39:22.272938   22547 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:39:22.272962   22547 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:39:22.272972   22547 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 10:39:22.273094   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:39:22.273173   22547 ssh_runner.go:195] Run: crio config
	I0729 10:39:22.316250   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:39:22.316273   22547 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:39:22.316283   22547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:39:22.316308   22547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-763049 NodeName:ha-763049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:39:22.316471   22547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-763049"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:39:22.316499   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:39:22.316550   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:39:22.333161   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:39:22.333284   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
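The generated manifest runs kube-vip as a static pod on the control plane: it holds the cluster VIP 192.168.39.254 on eth0 via ARP, and because vip_leaderelection, cp_enable, and lb_enable are set it load-balances the API server across control-plane nodes, coordinating through the plndr-cp-lock Lease named above. Assuming that lease-based election is in effect, the current VIP holder can be checked with a standard kubectl query, for example:

	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'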
	I0729 10:39:22.333350   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:39:22.343604   22547 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:39:22.343666   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 10:39:22.353543   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 10:39:22.371552   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:39:22.388445   22547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 10:39:22.405844   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 10:39:22.422807   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:39:22.426602   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:39:22.439119   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:22.572985   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:39:22.589815   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.68
	I0729 10:39:22.589841   22547 certs.go:194] generating shared ca certs ...
	I0729 10:39:22.589872   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.590034   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:39:22.590091   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:39:22.590106   22547 certs.go:256] generating profile certs ...
	I0729 10:39:22.590167   22547 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:39:22.590184   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt with IP's: []
	I0729 10:39:22.798588   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt ...
	I0729 10:39:22.798617   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt: {Name:mk8726fe8d9d70191efa461a421de8e0ef61240d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.798814   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key ...
	I0729 10:39:22.798832   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key: {Name:mk794f5476902a4cf64a0422faec2c5b4ffae7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.798936   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae
	I0729 10:39:22.798958   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.254]
	I0729 10:39:22.933457   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae ...
	I0729 10:39:22.933483   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae: {Name:mk6d7aa030326f6063141278dafe1a87a05ebef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.933649   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae ...
	I0729 10:39:22.933668   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae: {Name:mkfa65284d28fb8ca272ca0f1ccf2a74e2be20ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:22.933756   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.be5b3cae -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:39:22.933863   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.be5b3cae -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:39:22.933936   22547 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:39:22.933957   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt with IP's: []
	I0729 10:39:23.030337   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt ...
	I0729 10:39:23.030362   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt: {Name:mk78c05815a3562526ae4c6c617ba0906af3cc32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:23.030525   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key ...
	I0729 10:39:23.030540   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key: {Name:mkec8714f7255ef23612611d8205d8b099bcce62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
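The profile certificates generated here are ordinary CA-signed key pairs: a client cert for kubectl access, an apiserver serving cert whose IP SANs cover the service IP, localhost, the node IP, and the HA VIP, and an aggregator (proxy-client) cert. As a rough illustration of the apiserver case only, the Go sketch below issues a serving certificate with the same IP SANs from a throwaway CA using crypto/x509; it is not minikube's code and error handling is abbreviated:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA (illustrative only).
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the same IP SANs as the apiserver profile cert above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.254"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte DER certificate signed by %s\n", len(srvDER), caCert.Subject.CommonName)
	}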
	I0729 10:39:23.030637   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:39:23.030661   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:39:23.030679   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:39:23.030715   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:39:23.030734   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:39:23.030752   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:39:23.030771   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:39:23.030788   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:39:23.030855   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:39:23.030905   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:39:23.030924   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:39:23.030957   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:39:23.030986   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:39:23.031014   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:39:23.031075   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:39:23.031108   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.031126   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.031144   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.031689   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:39:23.071341   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:39:23.100071   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:39:23.129684   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:39:23.156536   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 10:39:23.180536   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:39:23.204104   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:39:23.227728   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:39:23.252831   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:39:23.276665   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:39:23.301230   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:39:23.325711   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:39:23.346356   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:39:23.354160   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:39:23.369101   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.380780   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.380851   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:39:23.389325   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:39:23.404598   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:39:23.417480   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.427076   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.427142   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:39:23.434240   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:39:23.445181   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:39:23.456244   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.461063   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.461109   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:39:23.466971   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
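Each test -L / ln -fs pair above creates the hashed symlink OpenSSL uses to find trust anchors in /etc/ssl/certs: the link name is the certificate's subject hash plus a .0 suffix, and the hash is exactly what the preceding `openssl x509 -hash -noout` call printed. For example, for the minikubeCA certificate in this run (hash value as used for the symlink above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0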
	I0729 10:39:23.477718   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:39:23.481837   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:39:23.481893   22547 kubeadm.go:392] StartCluster: {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:39:23.481969   22547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:39:23.482037   22547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:39:23.520583   22547 cri.go:89] found id: ""
	I0729 10:39:23.520655   22547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:39:23.530886   22547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:39:23.541172   22547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:39:23.550920   22547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:39:23.550938   22547 kubeadm.go:157] found existing configuration files:
	
	I0729 10:39:23.550992   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:39:23.560298   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:39:23.560355   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:39:23.569852   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:39:23.579097   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:39:23.579181   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:39:23.588698   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.597909   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:39:23.597972   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.607922   22547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:39:23.617362   22547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:39:23.617417   22547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:39:23.627121   22547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:39:23.733375   22547 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:39:23.733497   22547 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:39:23.875084   22547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:39:23.875227   22547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:39:23.875391   22547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:39:24.085855   22547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:39:24.378338   22547 out.go:204]   - Generating certificates and keys ...
	I0729 10:39:24.378467   22547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:39:24.378574   22547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:39:24.378748   22547 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:39:24.405756   22547 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:39:24.456478   22547 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:39:24.596679   22547 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:39:24.702522   22547 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:39:24.702675   22547 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-763049 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0729 10:39:24.764306   22547 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:39:24.764447   22547 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-763049 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0729 10:39:24.971011   22547 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:39:25.207262   22547 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:39:25.362609   22547 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:39:25.362695   22547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:39:25.506790   22547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:39:25.708122   22547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:39:26.061258   22547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:39:26.117983   22547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:39:26.454725   22547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:39:26.456829   22547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:39:26.459297   22547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:39:26.461316   22547 out.go:204]   - Booting up control plane ...
	I0729 10:39:26.461429   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:39:26.461529   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:39:26.461638   22547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:39:26.475981   22547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:39:26.476848   22547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:39:26.476889   22547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:39:26.606132   22547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:39:26.606225   22547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:39:27.119209   22547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.740178ms
	I0729 10:39:27.119279   22547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:39:33.233694   22547 kubeadm.go:310] [api-check] The API server is healthy after 6.118585367s
	I0729 10:39:33.253736   22547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:39:33.270849   22547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:39:33.800546   22547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:39:33.800738   22547 kubeadm.go:310] [mark-control-plane] Marking the node ha-763049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:39:33.815384   22547 kubeadm.go:310] [bootstrap-token] Using token: 6vmhhd.ltmhhdran4o8516u
	I0729 10:39:33.816915   22547 out.go:204]   - Configuring RBAC rules ...
	I0729 10:39:33.817040   22547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:39:33.824083   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:39:33.838609   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:39:33.842317   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:39:33.846144   22547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:39:33.849951   22547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:39:33.867391   22547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:39:34.111922   22547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:39:34.641225   22547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:39:34.642031   22547 kubeadm.go:310] 
	I0729 10:39:34.642097   22547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:39:34.642118   22547 kubeadm.go:310] 
	I0729 10:39:34.642197   22547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:39:34.642204   22547 kubeadm.go:310] 
	I0729 10:39:34.642289   22547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:39:34.642363   22547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:39:34.642414   22547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:39:34.642421   22547 kubeadm.go:310] 
	I0729 10:39:34.642466   22547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:39:34.642481   22547 kubeadm.go:310] 
	I0729 10:39:34.642522   22547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:39:34.642528   22547 kubeadm.go:310] 
	I0729 10:39:34.642569   22547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:39:34.642643   22547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:39:34.642719   22547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:39:34.642740   22547 kubeadm.go:310] 
	I0729 10:39:34.642960   22547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:39:34.643039   22547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:39:34.643047   22547 kubeadm.go:310] 
	I0729 10:39:34.643122   22547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6vmhhd.ltmhhdran4o8516u \
	I0729 10:39:34.643208   22547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 10:39:34.643227   22547 kubeadm.go:310] 	--control-plane 
	I0729 10:39:34.643242   22547 kubeadm.go:310] 
	I0729 10:39:34.643372   22547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:39:34.643384   22547 kubeadm.go:310] 
	I0729 10:39:34.643503   22547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6vmhhd.ltmhhdran4o8516u \
	I0729 10:39:34.643663   22547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 10:39:34.644251   22547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:39:34.644323   22547 cni.go:84] Creating CNI manager for ""
	I0729 10:39:34.644340   22547 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:39:34.646388   22547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 10:39:34.647902   22547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 10:39:34.658214   22547 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 10:39:34.658237   22547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 10:39:34.679004   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 10:39:35.041167   22547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:39:35.041240   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:35.041267   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049 minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=true
	I0729 10:39:35.063579   22547 ops.go:34] apiserver oom_adj: -16
	I0729 10:39:35.190668   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:35.690992   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:36.190905   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:36.691217   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:37.190922   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:37.691236   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:38.191565   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:38.691530   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:39.190984   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:39.691059   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:40.190772   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:40.691719   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:41.190847   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:41.691012   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:42.191207   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:42.691224   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:43.191590   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:43.691339   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:44.190939   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:44.691132   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:45.191225   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:45.691501   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:46.190882   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:46.691697   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:47.191478   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:47.690991   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:48.191466   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:48.325149   22547 kubeadm.go:1113] duration metric: took 13.28397001s to wait for elevateKubeSystemPrivileges
	I0729 10:39:48.325188   22547 kubeadm.go:394] duration metric: took 24.843296888s to StartCluster
	I0729 10:39:48.325210   22547 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:48.325340   22547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:39:48.326287   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:48.326542   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:39:48.326552   22547 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:39:48.326578   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:39:48.326586   22547 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:39:48.326645   22547 addons.go:69] Setting storage-provisioner=true in profile "ha-763049"
	I0729 10:39:48.326675   22547 addons.go:234] Setting addon storage-provisioner=true in "ha-763049"
	I0729 10:39:48.326687   22547 addons.go:69] Setting default-storageclass=true in profile "ha-763049"
	I0729 10:39:48.326717   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:39:48.326752   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:48.326793   22547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-763049"
	I0729 10:39:48.327142   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.327154   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.327175   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.327176   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.341990   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0729 10:39:48.342068   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
	I0729 10:39:48.342429   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.342537   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.343025   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.343044   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.343170   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.343194   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.343400   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.343517   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.343560   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.344094   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.344134   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.345884   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:39:48.346221   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:39:48.346725   22547 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 10:39:48.346995   22547 addons.go:234] Setting addon default-storageclass=true in "ha-763049"
	I0729 10:39:48.347038   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:39:48.347404   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.347436   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.359956   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0729 10:39:48.360526   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.361028   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.361047   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.361429   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.361622   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.362323   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0729 10:39:48.362722   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.363239   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.363262   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.363282   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:48.363605   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.364119   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:48.364165   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:48.365217   22547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:39:48.367384   22547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:48.367405   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:39:48.367435   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:48.371124   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.371537   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:48.371575   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.371728   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:48.371917   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:48.372038   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:48.372216   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:48.380718   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0729 10:39:48.381143   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:48.381629   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:48.381643   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:48.381934   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:48.382148   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:39:48.383621   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:39:48.383838   22547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:48.383854   22547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:39:48.383872   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:39:48.386516   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.386912   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:39:48.386938   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:39:48.387072   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:39:48.387239   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:39:48.387389   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:39:48.387517   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:39:48.465561   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:39:48.532068   22547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:48.580774   22547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:48.828047   22547 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 10:39:49.029755   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.029789   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.029797   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.029815   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030066   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030072   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030079   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030086   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030097   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.030106   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030088   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.030200   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.030303   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030315   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030443   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.030462   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.030579   22547 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 10:39:49.030591   22547 round_trippers.go:469] Request Headers:
	I0729 10:39:49.030603   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:39:49.030614   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:39:49.043164   22547 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0729 10:39:49.043970   22547 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 10:39:49.043991   22547 round_trippers.go:469] Request Headers:
	I0729 10:39:49.044001   22547 round_trippers.go:473]     Content-Type: application/json
	I0729 10:39:49.044006   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:39:49.044012   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:39:49.047889   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:39:49.048057   22547 main.go:141] libmachine: Making call to close driver server
	I0729 10:39:49.048079   22547 main.go:141] libmachine: (ha-763049) Calling .Close
	I0729 10:39:49.048324   22547 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:39:49.048348   22547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:39:49.048350   22547 main.go:141] libmachine: (ha-763049) DBG | Closing plugin on server side
	I0729 10:39:49.050301   22547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 10:39:49.051545   22547 addons.go:510] duration metric: took 724.956749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 10:39:49.051574   22547 start.go:246] waiting for cluster config update ...
	I0729 10:39:49.051584   22547 start.go:255] writing updated cluster config ...
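Side note on the round-trip lines above: after applying the storageclass manifest, the tooling reads back /apis/storage.k8s.io/v1/storageclasses and then PUTs the "standard" class. The sketch below is not the minikube code path, only a minimal client-go equivalent of that read, assuming a locally reachable kubeconfig path (the real run uses its own profile directory); it simply lists the classes and the default-class annotation.

	// Minimal client-go sketch (illustration only; kubeconfig path is an assumption).
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of the GET against /apis/storage.k8s.io/v1/storageclasses above.
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
		}
	}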
	I0729 10:39:49.053174   22547 out.go:177] 
	I0729 10:39:49.054601   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:39:49.054686   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:49.056248   22547 out.go:177] * Starting "ha-763049-m02" control-plane node in "ha-763049" cluster
	I0729 10:39:49.057622   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:39:49.057648   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:39:49.057758   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:39:49.057772   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:39:49.057863   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:39:49.058064   22547 start.go:360] acquireMachinesLock for ha-763049-m02: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:39:49.058126   22547 start.go:364] duration metric: took 32.207µs to acquireMachinesLock for "ha-763049-m02"
	I0729 10:39:49.058145   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:39:49.058213   22547 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 10:39:49.059776   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:39:49.059853   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:39:49.059878   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:39:49.074210   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0729 10:39:49.074648   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:39:49.075131   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:39:49.075154   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:39:49.075459   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:39:49.075616   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:39:49.075762   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:39:49.075927   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:39:49.075950   22547 client.go:168] LocalClient.Create starting
	I0729 10:39:49.075982   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:39:49.076019   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:49.076032   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:49.076079   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:39:49.076104   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:49.076114   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:49.076135   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:39:49.076143   22547 main.go:141] libmachine: (ha-763049-m02) Calling .PreCreateCheck
	I0729 10:39:49.076282   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:39:49.076659   22547 main.go:141] libmachine: Creating machine...
	I0729 10:39:49.076673   22547 main.go:141] libmachine: (ha-763049-m02) Calling .Create
	I0729 10:39:49.076786   22547 main.go:141] libmachine: (ha-763049-m02) Creating KVM machine...
	I0729 10:39:49.077925   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found existing default KVM network
	I0729 10:39:49.078122   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found existing private KVM network mk-ha-763049
	I0729 10:39:49.078239   22547 main.go:141] libmachine: (ha-763049-m02) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 ...
	I0729 10:39:49.078268   22547 main.go:141] libmachine: (ha-763049-m02) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:39:49.078331   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.078232   22945 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:39:49.078439   22547 main.go:141] libmachine: (ha-763049-m02) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:39:49.305736   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.305617   22945 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa...
	I0729 10:39:49.543100   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.542923   22945 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/ha-763049-m02.rawdisk...
	I0729 10:39:49.543134   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Writing magic tar header
	I0729 10:39:49.543150   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Writing SSH key tar header
	I0729 10:39:49.543164   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:49.543032   22945 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 ...
	I0729 10:39:49.543180   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02
	I0729 10:39:49.543218   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02 (perms=drwx------)
	I0729 10:39:49.543244   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:39:49.543256   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:39:49.543284   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:39:49.543314   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:39:49.543335   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:39:49.543350   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:39:49.543362   22547 main.go:141] libmachine: (ha-763049-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:39:49.543370   22547 main.go:141] libmachine: (ha-763049-m02) Creating domain...
	I0729 10:39:49.543380   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:39:49.543391   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:39:49.543404   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:39:49.543415   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Checking permissions on dir: /home
	I0729 10:39:49.543432   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Skipping /home - not owner
	I0729 10:39:49.544268   22547 main.go:141] libmachine: (ha-763049-m02) define libvirt domain using xml: 
	I0729 10:39:49.544287   22547 main.go:141] libmachine: (ha-763049-m02) <domain type='kvm'>
	I0729 10:39:49.544322   22547 main.go:141] libmachine: (ha-763049-m02)   <name>ha-763049-m02</name>
	I0729 10:39:49.544347   22547 main.go:141] libmachine: (ha-763049-m02)   <memory unit='MiB'>2200</memory>
	I0729 10:39:49.544360   22547 main.go:141] libmachine: (ha-763049-m02)   <vcpu>2</vcpu>
	I0729 10:39:49.544366   22547 main.go:141] libmachine: (ha-763049-m02)   <features>
	I0729 10:39:49.544375   22547 main.go:141] libmachine: (ha-763049-m02)     <acpi/>
	I0729 10:39:49.544385   22547 main.go:141] libmachine: (ha-763049-m02)     <apic/>
	I0729 10:39:49.544394   22547 main.go:141] libmachine: (ha-763049-m02)     <pae/>
	I0729 10:39:49.544404   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.544411   22547 main.go:141] libmachine: (ha-763049-m02)   </features>
	I0729 10:39:49.544422   22547 main.go:141] libmachine: (ha-763049-m02)   <cpu mode='host-passthrough'>
	I0729 10:39:49.544432   22547 main.go:141] libmachine: (ha-763049-m02)   
	I0729 10:39:49.544444   22547 main.go:141] libmachine: (ha-763049-m02)   </cpu>
	I0729 10:39:49.544455   22547 main.go:141] libmachine: (ha-763049-m02)   <os>
	I0729 10:39:49.544463   22547 main.go:141] libmachine: (ha-763049-m02)     <type>hvm</type>
	I0729 10:39:49.544474   22547 main.go:141] libmachine: (ha-763049-m02)     <boot dev='cdrom'/>
	I0729 10:39:49.544483   22547 main.go:141] libmachine: (ha-763049-m02)     <boot dev='hd'/>
	I0729 10:39:49.544492   22547 main.go:141] libmachine: (ha-763049-m02)     <bootmenu enable='no'/>
	I0729 10:39:49.544501   22547 main.go:141] libmachine: (ha-763049-m02)   </os>
	I0729 10:39:49.544510   22547 main.go:141] libmachine: (ha-763049-m02)   <devices>
	I0729 10:39:49.544520   22547 main.go:141] libmachine: (ha-763049-m02)     <disk type='file' device='cdrom'>
	I0729 10:39:49.544540   22547 main.go:141] libmachine: (ha-763049-m02)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/boot2docker.iso'/>
	I0729 10:39:49.544560   22547 main.go:141] libmachine: (ha-763049-m02)       <target dev='hdc' bus='scsi'/>
	I0729 10:39:49.544569   22547 main.go:141] libmachine: (ha-763049-m02)       <readonly/>
	I0729 10:39:49.544576   22547 main.go:141] libmachine: (ha-763049-m02)     </disk>
	I0729 10:39:49.544586   22547 main.go:141] libmachine: (ha-763049-m02)     <disk type='file' device='disk'>
	I0729 10:39:49.544598   22547 main.go:141] libmachine: (ha-763049-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:39:49.544615   22547 main.go:141] libmachine: (ha-763049-m02)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/ha-763049-m02.rawdisk'/>
	I0729 10:39:49.544631   22547 main.go:141] libmachine: (ha-763049-m02)       <target dev='hda' bus='virtio'/>
	I0729 10:39:49.544642   22547 main.go:141] libmachine: (ha-763049-m02)     </disk>
	I0729 10:39:49.544661   22547 main.go:141] libmachine: (ha-763049-m02)     <interface type='network'>
	I0729 10:39:49.544692   22547 main.go:141] libmachine: (ha-763049-m02)       <source network='mk-ha-763049'/>
	I0729 10:39:49.544704   22547 main.go:141] libmachine: (ha-763049-m02)       <model type='virtio'/>
	I0729 10:39:49.544723   22547 main.go:141] libmachine: (ha-763049-m02)     </interface>
	I0729 10:39:49.544744   22547 main.go:141] libmachine: (ha-763049-m02)     <interface type='network'>
	I0729 10:39:49.544756   22547 main.go:141] libmachine: (ha-763049-m02)       <source network='default'/>
	I0729 10:39:49.544767   22547 main.go:141] libmachine: (ha-763049-m02)       <model type='virtio'/>
	I0729 10:39:49.544785   22547 main.go:141] libmachine: (ha-763049-m02)     </interface>
	I0729 10:39:49.544810   22547 main.go:141] libmachine: (ha-763049-m02)     <serial type='pty'>
	I0729 10:39:49.544823   22547 main.go:141] libmachine: (ha-763049-m02)       <target port='0'/>
	I0729 10:39:49.544830   22547 main.go:141] libmachine: (ha-763049-m02)     </serial>
	I0729 10:39:49.544841   22547 main.go:141] libmachine: (ha-763049-m02)     <console type='pty'>
	I0729 10:39:49.544924   22547 main.go:141] libmachine: (ha-763049-m02)       <target type='serial' port='0'/>
	I0729 10:39:49.544971   22547 main.go:141] libmachine: (ha-763049-m02)     </console>
	I0729 10:39:49.544984   22547 main.go:141] libmachine: (ha-763049-m02)     <rng model='virtio'>
	I0729 10:39:49.544993   22547 main.go:141] libmachine: (ha-763049-m02)       <backend model='random'>/dev/random</backend>
	I0729 10:39:49.544998   22547 main.go:141] libmachine: (ha-763049-m02)     </rng>
	I0729 10:39:49.545004   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.545009   22547 main.go:141] libmachine: (ha-763049-m02)     
	I0729 10:39:49.545016   22547 main.go:141] libmachine: (ha-763049-m02)   </devices>
	I0729 10:39:49.545022   22547 main.go:141] libmachine: (ha-763049-m02) </domain>
	I0729 10:39:49.545030   22547 main.go:141] libmachine: (ha-763049-m02) 
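The indented block above is the libvirt domain XML the kvm2 driver generates for ha-763049-m02: boot from the boot2docker ISO, a raw disk image, two virtio NICs (on mk-ha-763049 and the default network), a serial console, and a virtio RNG. As a rough illustration of what "define libvirt domain using xml" followed by "Creating domain..." amounts to, here is a minimal sketch using the Go libvirt bindings; the module path, file name, and error handling are assumptions, not the driver's own code.

	// Sketch: define and start a domain from an XML file with the Go libvirt bindings.
	package main

	import (
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		xml, err := os.ReadFile("ha-763049-m02.xml") // hypothetical file holding XML like the block above
		if err != nil {
			panic(err)
		}
		dom, err := conn.DomainDefineXML(string(xml)) // persistent definition, like `virsh define`
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM, like `virsh start`
			panic(err)
		}
	}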
	I0729 10:39:49.552646   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:b4:66:7d in network default
	I0729 10:39:49.553493   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring networks are active...
	I0729 10:39:49.553515   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:49.554254   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring network default is active
	I0729 10:39:49.554624   22547 main.go:141] libmachine: (ha-763049-m02) Ensuring network mk-ha-763049 is active
	I0729 10:39:49.555013   22547 main.go:141] libmachine: (ha-763049-m02) Getting domain xml...
	I0729 10:39:49.555702   22547 main.go:141] libmachine: (ha-763049-m02) Creating domain...
	I0729 10:39:50.763183   22547 main.go:141] libmachine: (ha-763049-m02) Waiting to get IP...
	I0729 10:39:50.763960   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:50.764372   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:50.764399   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:50.764337   22945 retry.go:31] will retry after 256.083153ms: waiting for machine to come up
	I0729 10:39:51.021770   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.022192   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.022268   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.022169   22945 retry.go:31] will retry after 250.837815ms: waiting for machine to come up
	I0729 10:39:51.274592   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.275098   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.275128   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.275041   22945 retry.go:31] will retry after 336.627351ms: waiting for machine to come up
	I0729 10:39:51.613501   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:51.613936   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:51.613964   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:51.613892   22945 retry.go:31] will retry after 440.270957ms: waiting for machine to come up
	I0729 10:39:52.055499   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:52.055935   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:52.055970   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:52.055882   22945 retry.go:31] will retry after 625.822615ms: waiting for machine to come up
	I0729 10:39:52.683824   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:52.684295   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:52.684321   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:52.684252   22945 retry.go:31] will retry after 681.635336ms: waiting for machine to come up
	I0729 10:39:53.367191   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:53.367665   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:53.367715   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:53.367639   22945 retry.go:31] will retry after 904.805807ms: waiting for machine to come up
	I0729 10:39:54.274089   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:54.274530   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:54.274560   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:54.274470   22945 retry.go:31] will retry after 1.013356281s: waiting for machine to come up
	I0729 10:39:55.289617   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:55.290021   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:55.290041   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:55.289967   22945 retry.go:31] will retry after 1.217157419s: waiting for machine to come up
	I0729 10:39:56.508416   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:56.508746   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:56.508766   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:56.508703   22945 retry.go:31] will retry after 2.283747131s: waiting for machine to come up
	I0729 10:39:58.795274   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:39:58.795793   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:39:58.795820   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:39:58.795749   22945 retry.go:31] will retry after 2.363192954s: waiting for machine to come up
	I0729 10:40:01.160070   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:01.160516   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:01.160544   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:01.160462   22945 retry.go:31] will retry after 3.128051052s: waiting for machine to come up
	I0729 10:40:04.290282   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:04.290804   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:04.290826   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:04.290742   22945 retry.go:31] will retry after 3.748020631s: waiting for machine to come up
	I0729 10:40:08.041140   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:08.041486   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find current IP address of domain ha-763049-m02 in network mk-ha-763049
	I0729 10:40:08.041511   22547 main.go:141] libmachine: (ha-763049-m02) DBG | I0729 10:40:08.041446   22945 retry.go:31] will retry after 5.530915798s: waiting for machine to come up
	I0729 10:40:13.577470   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.577984   22547 main.go:141] libmachine: (ha-763049-m02) Found IP for machine: 192.168.39.39
	I0729 10:40:13.578013   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has current primary IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
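The "will retry after ..." lines above show the driver polling the network's DHCP leases with growing, jittered delays until the new domain picks up 192.168.39.39. A generic sketch of that wait loop (not minikube's actual retry package; delays and the lookup function are assumptions) looks roughly like this:

	// Generic retry-with-growing-delay sketch mirroring the "will retry after ..." pattern above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the libvirt network's DHCP leases.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jitter and grow the delay, roughly like the intervals logged above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if ip, err := waitForIP(2 * time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}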
	I0729 10:40:13.578023   22547 main.go:141] libmachine: (ha-763049-m02) Reserving static IP address...
	I0729 10:40:13.578410   22547 main.go:141] libmachine: (ha-763049-m02) DBG | unable to find host DHCP lease matching {name: "ha-763049-m02", mac: "52:54:00:d3:91:e5", ip: "192.168.39.39"} in network mk-ha-763049
	I0729 10:40:13.652350   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Getting to WaitForSSH function...
	I0729 10:40:13.652378   22547 main.go:141] libmachine: (ha-763049-m02) Reserved static IP address: 192.168.39.39
	I0729 10:40:13.652416   22547 main.go:141] libmachine: (ha-763049-m02) Waiting for SSH to be available...
	I0729 10:40:13.655188   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.655588   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.655617   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.655808   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using SSH client type: external
	I0729 10:40:13.655842   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa (-rw-------)
	I0729 10:40:13.655885   22547 main.go:141] libmachine: (ha-763049-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:40:13.655897   22547 main.go:141] libmachine: (ha-763049-m02) DBG | About to run SSH command:
	I0729 10:40:13.655916   22547 main.go:141] libmachine: (ha-763049-m02) DBG | exit 0
	I0729 10:40:13.787211   22547 main.go:141] libmachine: (ha-763049-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 10:40:13.787483   22547 main.go:141] libmachine: (ha-763049-m02) KVM machine creation complete!
	I0729 10:40:13.787810   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:40:13.788477   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:13.788687   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:13.788890   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:40:13.788907   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:40:13.790123   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:40:13.790139   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:40:13.790146   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:40:13.790154   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:13.792784   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.793200   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.793225   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.793465   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:13.793643   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.793830   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.794001   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:13.794172   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:13.794371   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:13.794382   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:40:13.906181   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
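The "Using SSH client type: native" / `exit 0` exchange above is the readiness probe used here: a no-op command is run over SSH until it returns cleanly. A minimal sketch of the same probe with golang.org/x/crypto/ssh follows; the key path is a placeholder, and the loop/timeouts are assumptions rather than the provisioner's own code.

	// Sketch: poll SSH readiness by running `exit 0` until it succeeds.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshReady(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // succeeds only once sshd accepts commands
	}

	func main() {
		for {
			if err := sshReady("192.168.39.39:22", "docker", "/path/to/id_rsa"); err == nil { // key path is hypothetical
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(time.Second)
		}
	}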
	I0729 10:40:13.906221   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:40:13.906231   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:13.908982   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.909389   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:13.909418   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:13.909554   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:13.909756   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.909905   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:13.910039   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:13.910178   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:13.910336   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:13.910346   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:40:14.023378   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:40:14.023457   22547 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:40:14.023473   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:40:14.023483   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.023705   22547 buildroot.go:166] provisioning hostname "ha-763049-m02"
	I0729 10:40:14.023726   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.023937   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.026733   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.027077   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.027101   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.027235   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.027426   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.027593   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.027720   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.027896   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.028107   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.028124   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049-m02 && echo "ha-763049-m02" | sudo tee /etc/hostname
	I0729 10:40:14.153617   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049-m02
	
	I0729 10:40:14.153650   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.156622   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.157064   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.157099   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.157259   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.157458   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.157623   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.157924   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.158092   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.158302   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.158321   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:40:14.280573   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:40:14.280605   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:40:14.280658   22547 buildroot.go:174] setting up certificates
	I0729 10:40:14.280683   22547 provision.go:84] configureAuth start
	I0729 10:40:14.280701   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetMachineName
	I0729 10:40:14.280979   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:14.283489   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.283890   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.283917   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.284111   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.286590   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.286944   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.286987   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.287142   22547 provision.go:143] copyHostCerts
	I0729 10:40:14.287183   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:40:14.287223   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:40:14.287235   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:40:14.287307   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:40:14.287410   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:40:14.287434   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:40:14.287442   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:40:14.287484   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:40:14.287559   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:40:14.287581   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:40:14.287588   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:40:14.287625   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:40:14.287709   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049-m02 san=[127.0.0.1 192.168.39.39 ha-763049-m02 localhost minikube]
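The "generating server cert" line above issues a per-machine server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.39.39, ha-763049-m02, localhost, minikube). The sketch below shows the general shape of that step with crypto/x509; the throwaway CA generated here stands in for ca.pem/ca-key.pem, and the names, IPs, and expiry are copied from the log purely for illustration.

	// Self-contained sketch: issue a server certificate with the SANs from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA key pair (the real run loads ca.pem / ca-key.pem from disk).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate for the new node, with the SANs shown in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-763049-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-763049-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}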
	I0729 10:40:14.362963   22547 provision.go:177] copyRemoteCerts
	I0729 10:40:14.363020   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:40:14.363045   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.365626   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.365963   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.365991   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.366181   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.366373   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.366533   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.366659   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:14.453521   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:40:14.453593   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:40:14.479396   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:40:14.479470   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:40:14.505028   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:40:14.505093   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:40:14.528997   22547 provision.go:87] duration metric: took 248.298993ms to configureAuth
	I0729 10:40:14.529026   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:40:14.529204   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:40:14.529286   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.531949   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.532260   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.532286   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.532426   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.532591   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.532748   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.532895   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.533051   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.533237   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.533255   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:40:14.801521   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:40:14.801548   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:40:14.801556   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetURL
	I0729 10:40:14.802764   22547 main.go:141] libmachine: (ha-763049-m02) DBG | Using libvirt version 6000000
	I0729 10:40:14.805815   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.806245   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.806273   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.806461   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:40:14.806479   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:40:14.806485   22547 client.go:171] duration metric: took 25.730528228s to LocalClient.Create
	I0729 10:40:14.806507   22547 start.go:167] duration metric: took 25.730587462s to libmachine.API.Create "ha-763049"
	I0729 10:40:14.806516   22547 start.go:293] postStartSetup for "ha-763049-m02" (driver="kvm2")
	I0729 10:40:14.806526   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:40:14.806546   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:14.806794   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:40:14.806821   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.809076   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.809441   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.809468   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.809581   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.809717   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.809839   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.810057   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:14.898192   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:40:14.902565   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:40:14.902588   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:40:14.902662   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:40:14.902769   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:40:14.902781   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:40:14.902862   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:40:14.913944   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:40:14.940699   22547 start.go:296] duration metric: took 134.171196ms for postStartSetup
	I0729 10:40:14.940755   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetConfigRaw
	I0729 10:40:14.941327   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:14.943504   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.943820   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.943852   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.944057   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:40:14.944253   22547 start.go:128] duration metric: took 25.88602743s to createHost
	I0729 10:40:14.944279   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:14.946518   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.946819   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:14.946880   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:14.946983   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:14.947128   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.947281   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:14.947409   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:14.947555   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:40:14.947712   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0729 10:40:14.947723   22547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:40:15.059704   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249615.037551185
	
	I0729 10:40:15.059735   22547 fix.go:216] guest clock: 1722249615.037551185
	I0729 10:40:15.059747   22547 fix.go:229] Guest: 2024-07-29 10:40:15.037551185 +0000 UTC Remote: 2024-07-29 10:40:14.944265521 +0000 UTC m=+82.900271025 (delta=93.285664ms)
	I0729 10:40:15.059771   22547 fix.go:200] guest clock delta is within tolerance: 93.285664ms
	I0729 10:40:15.059782   22547 start.go:83] releasing machines lock for "ha-763049-m02", held for 26.001645056s
	I0729 10:40:15.059809   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.060129   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:15.062589   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.062932   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.062964   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.065431   22547 out.go:177] * Found network options:
	I0729 10:40:15.066951   22547 out.go:177]   - NO_PROXY=192.168.39.68
	W0729 10:40:15.068109   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:40:15.068144   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.068738   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.068946   22547 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:40:15.069009   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:40:15.069049   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	W0729 10:40:15.069146   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:40:15.069224   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:40:15.069244   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:40:15.071950   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072030   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072308   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.072349   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072376   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:15.072398   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:15.072497   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:15.072591   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:40:15.072675   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:15.072733   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:40:15.072792   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:15.072840   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:40:15.072996   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:15.072996   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:40:15.311326   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:40:15.317166   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:40:15.317235   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:40:15.333420   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:40:15.333442   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:40:15.333499   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:40:15.349212   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:40:15.363556   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:40:15.363621   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:40:15.377859   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:40:15.392260   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:40:15.508310   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:40:15.671262   22547 docker.go:233] disabling docker service ...
	I0729 10:40:15.671341   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:40:15.686239   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:40:15.699671   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:40:15.817382   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:40:15.944364   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:40:15.959074   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:40:15.979419   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:40:15.979485   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:15.990671   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:40:15.990761   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.001785   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.012564   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.022917   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:40:16.033413   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.043862   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.061875   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:40:16.073112   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:40:16.083230   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:40:16.083301   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:40:16.096536   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:40:16.107231   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:40:16.231158   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:40:16.371507   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:40:16.371590   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:40:16.377139   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:40:16.377189   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:40:16.381032   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:40:16.422442   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:40:16.422516   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:40:16.454744   22547 ssh_runner.go:195] Run: crio --version
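	For readers scanning the log, the CRI-O preparation above condenses to the following shell sequence (a sketch assembled from the Run: lines above; it is not an additional command set the harness executed):

    # Point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # Pin the pause image and switch CRI-O to the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Allow unprivileged low ports inside pods
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites, then restart and verify the runtime
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo /usr/bin/crictl version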
	I0729 10:40:16.484710   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:40:16.486332   22547 out.go:177]   - env NO_PROXY=192.168.39.68
	I0729 10:40:16.487547   22547 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:40:16.490155   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:16.490479   22547 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:40:03 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:40:16.490515   22547 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:40:16.490693   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:40:16.494942   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:40:16.508235   22547 mustload.go:65] Loading cluster: ha-763049
	I0729 10:40:16.508453   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:40:16.508709   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:16.508735   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:16.523202   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0729 10:40:16.523600   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:16.524011   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:16.524045   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:16.524344   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:16.524515   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:40:16.525982   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:40:16.526340   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:16.526367   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:16.540874   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0729 10:40:16.541337   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:16.541781   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:16.541803   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:16.542152   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:16.542331   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:40:16.542538   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.39
	I0729 10:40:16.542548   22547 certs.go:194] generating shared ca certs ...
	I0729 10:40:16.542560   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.542741   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:40:16.542794   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:40:16.542806   22547 certs.go:256] generating profile certs ...
	I0729 10:40:16.542920   22547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:40:16.542947   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb
	I0729 10:40:16.542965   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.254]
	I0729 10:40:16.776120   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb ...
	I0729 10:40:16.776148   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb: {Name:mk76f5031f273c03270902394a7378060388e576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.776337   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb ...
	I0729 10:40:16.776353   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb: {Name:mk219b6f38ef315c3e77e8846f51b55e50556b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:40:16.776445   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.498053eb -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:40:16.776602   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.498053eb -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:40:16.776772   22547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:40:16.776792   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:40:16.776811   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:40:16.776830   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:40:16.776853   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:40:16.776869   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:40:16.776880   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:40:16.776897   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:40:16.776915   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:40:16.776977   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:40:16.777022   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:40:16.777035   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:40:16.777072   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:40:16.777102   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:40:16.777129   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:40:16.777183   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:40:16.777227   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:40:16.777247   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:16.777264   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:40:16.777301   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:40:16.780314   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:16.780682   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:40:16.780702   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:16.780891   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:40:16.781095   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:40:16.781235   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:40:16.781477   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:40:16.855155   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 10:40:16.860509   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 10:40:16.872756   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 10:40:16.877009   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 10:40:16.888497   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 10:40:16.893180   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 10:40:16.904543   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 10:40:16.909594   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 10:40:16.921641   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 10:40:16.926380   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 10:40:16.939815   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 10:40:16.944530   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 10:40:16.957272   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:40:16.983699   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:40:17.015306   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:40:17.039317   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:40:17.064417   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 10:40:17.089285   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:40:17.113583   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:40:17.138338   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:40:17.162527   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:40:17.186613   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:40:17.210724   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:40:17.235005   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 10:40:17.251740   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 10:40:17.269133   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 10:40:17.285894   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 10:40:17.302454   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 10:40:17.320009   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 10:40:17.336669   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
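	At this point the new control-plane host holds the full certificate set it needs: the cluster and proxy-client CAs, an API server serving pair whose SANs were generated above to include 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.68, 192.168.39.39 and the VIP 192.168.39.254, the service-account key pair, the front-proxy CA and the etcd CA. A quick way to confirm the SANs on the copied cert (an illustrative check, not something this test run performs):

    # Run on ha-763049-m02; prints the Subject Alternative Name list of the copied cert
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'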
	I0729 10:40:17.354521   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:40:17.360601   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:40:17.372110   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.376682   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.376740   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:40:17.382734   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:40:17.394077   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:40:17.405356   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.409859   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.409922   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:40:17.415664   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:40:17.426940   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:40:17.438352   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.442890   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.442953   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:40:17.448872   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:40:17.460242   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:40:17.464460   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:40:17.464510   22547 kubeadm.go:934] updating node {m02 192.168.39.39 8443 v1.30.3 crio true true} ...
	I0729 10:40:17.464598   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:40:17.464628   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:40:17.464679   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:40:17.483432   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:40:17.483502   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
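	The manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte copy a few lines below). kubelet starts it directly, and it advertises the control-plane VIP 192.168.39.254 on port 8443 via ARP, with leader election and control-plane load balancing enabled (cp_enable/lb_enable). Two illustrative checks one could run on the node once kubelet is up (assumed commands, not part of this test run):

    # The static pod should appear in CRI-O
    sudo crictl pods --name kube-vip
    # The VIP should answer on the API server port (served by whichever member holds the lease)
    curl -k https://192.168.39.254:8443/healthz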
	I0729 10:40:17.483572   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:40:17.494014   22547 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 10:40:17.494085   22547 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 10:40:17.504126   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 10:40:17.504154   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:40:17.504157   22547 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 10:40:17.504226   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:40:17.504164   22547 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 10:40:17.508598   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 10:40:17.508624   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 10:40:26.588374   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:40:26.605887   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:40:26.605979   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:40:26.610585   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 10:40:26.610626   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 10:40:49.462433   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:40:49.462508   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:40:49.469293   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 10:40:49.469326   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 10:40:49.699988   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 10:40:49.709764   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 10:40:49.727350   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:40:49.746778   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:40:49.765868   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:40:49.770160   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:40:49.783283   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:40:49.897442   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:40:49.913547   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:40:49.913865   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:40:49.913898   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:40:49.929451   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0729 10:40:49.929930   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:40:49.930380   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:40:49.930401   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:40:49.930634   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:40:49.930830   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:40:49.931071   22547 start.go:317] joinCluster: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:40:49.931184   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 10:40:49.931199   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:40:49.934458   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:49.934946   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:40:49.934972   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:40:49.935196   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:40:49.935349   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:40:49.935516   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:40:49.935649   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:40:50.096508   22547 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:40:50.096559   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xrik40.tjmw5hvghjzuo9u5 --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0729 10:41:13.101397   22547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xrik40.tjmw5hvghjzuo9u5 --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (23.004815839s)
	I0729 10:41:13.101437   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 10:41:13.655844   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049-m02 minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=false
	I0729 10:41:13.789098   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-763049-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 10:41:13.916429   22547 start.go:319] duration metric: took 23.985355289s to joinCluster
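	The join itself follows the standard kubeadm flow for an additional control plane: mint a join command on the existing node, then run it on m02 with --control-plane and the node's advertise address. Condensed from the two Run: lines above (token and CA hash replaced with placeholders for the values shown in the log):

    # On ha-763049 (existing control plane): mint a join command
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm token create --print-join-command --ttl=0
    # On ha-763049-m02 (new node): join as an additional control plane
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443 \
        --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m02 \
        --ignore-preflight-errors=all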
	I0729 10:41:13.916505   22547 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:41:13.916779   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:41:13.918192   22547 out.go:177] * Verifying Kubernetes components...
	I0729 10:41:13.919632   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:41:14.203636   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:41:14.278045   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:41:14.278505   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 10:41:14.278584   22547 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0729 10:41:14.278875   22547 node_ready.go:35] waiting up to 6m0s for node "ha-763049-m02" to be "Ready" ...
	I0729 10:41:14.279004   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:14.279016   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:14.279028   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:14.279050   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:14.292508   22547 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 10:41:14.779193   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:14.779247   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:14.779259   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:14.779266   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:14.785629   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:15.279581   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:15.279612   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:15.279622   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:15.279627   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:15.286025   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:15.780020   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:15.780048   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:15.780059   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:15.780064   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:15.787539   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:41:16.279395   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:16.279417   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:16.279425   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:16.279430   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:16.309331   22547 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0729 10:41:16.309871   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
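	The round_trippers lines that follow are minikube polling GET /api/v1/nodes/ha-763049-m02 roughly every 500ms, for up to the 6m0s budget noted above, until the node reports a Ready condition of True; the node_ready lines summarize each observation. The equivalent manual check (illustrative only, not part of this run):

    kubectl --kubeconfig /home/jenkins/minikube-integration/19337-3845/kubeconfig \
      get node ha-763049-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'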
	I0729 10:41:16.779417   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:16.779440   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:16.779447   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:16.779451   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:16.783431   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:17.279414   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:17.279452   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:17.279463   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:17.279469   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:17.282994   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:17.780018   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:17.780044   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:17.780055   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:17.780061   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:17.783842   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.279519   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:18.279540   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:18.279548   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:18.279553   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:18.282922   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.779287   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:18.779307   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:18.779315   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:18.779319   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:18.782547   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:18.783444   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:19.279506   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:19.279536   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:19.279547   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:19.279551   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:19.283370   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:19.779049   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:19.779069   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:19.779083   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:19.779088   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:19.782314   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.279894   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:20.279917   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:20.279926   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:20.279930   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:20.283851   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.779982   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:20.780006   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:20.780018   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:20.780023   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:20.783876   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:20.784771   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:21.279958   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:21.279978   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:21.279987   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:21.279991   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:21.283232   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:21.779188   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:21.779209   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:21.779218   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:21.779221   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:21.782839   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:22.280052   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:22.280073   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:22.280080   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:22.280083   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:22.288062   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:41:22.779995   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:22.780022   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:22.780034   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:22.780041   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:22.784016   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:22.784887   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:23.279265   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:23.279286   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:23.279294   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:23.279299   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:23.282941   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:23.779940   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:23.779962   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:23.779973   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:23.779978   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:23.783848   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.279543   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:24.279565   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:24.279573   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:24.279579   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:24.283010   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.780070   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:24.780092   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:24.780102   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:24.780107   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:24.783959   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:24.785130   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:25.279937   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:25.279959   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:25.279966   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:25.279971   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:25.283205   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:25.779232   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:25.779254   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:25.779262   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:25.779265   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:25.783726   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:26.279902   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:26.279923   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:26.279930   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:26.279934   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:26.284077   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:26.779923   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:26.779944   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:26.779952   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:26.779956   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:26.783347   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:27.279113   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:27.279135   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:27.279142   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:27.279148   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:27.282895   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:27.283912   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:27.779949   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:27.779971   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:27.779979   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:27.779984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:27.783783   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:28.279736   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:28.279762   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:28.279772   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:28.279777   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:28.283208   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:28.779398   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:28.779426   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:28.779437   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:28.779443   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:28.782615   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:29.279939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:29.279967   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:29.279977   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:29.279984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:29.285806   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:41:29.286535   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:29.779871   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:29.779893   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:29.779902   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:29.779906   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:29.784006   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:30.279363   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:30.279386   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:30.279395   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:30.279400   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:30.283478   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:30.779957   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:30.779985   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:30.779995   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:30.780001   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:30.783212   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:31.279254   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:31.279276   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:31.279283   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:31.279289   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:31.283909   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:31.780011   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:31.780037   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:31.780048   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:31.780055   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:31.783632   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:31.784159   22547 node_ready.go:53] node "ha-763049-m02" has status "Ready":"False"
	I0729 10:41:32.279389   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.279409   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.279416   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.279422   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.283068   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.779139   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.779160   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.779168   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.779173   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.782381   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.783081   22547 node_ready.go:49] node "ha-763049-m02" has status "Ready":"True"
	I0729 10:41:32.783106   22547 node_ready.go:38] duration metric: took 18.5041845s for node "ha-763049-m02" to be "Ready" ...
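The block above is minikube re-reading the node object roughly every 500 ms until its Ready condition reports True (about 18.5 s here for ha-763049-m02). A minimal client-go sketch of that polling pattern, assuming an already-configured clientset; waitNodeReady is a hypothetical helper, not minikube's node_ready code:

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-reads the node every 500ms until its Ready condition is
    // True, or until ctx is cancelled / times out.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }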
	I0729 10:41:32.783115   22547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:41:32.783183   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:32.783193   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.783200   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.783203   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.787652   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:32.793437   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.793505   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-l4n5p
	I0729 10:41:32.793510   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.793517   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.793522   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.796630   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.797283   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.797297   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.797303   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.797307   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.801151   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.801632   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.801649   22547 pod_ready.go:81] duration metric: took 8.190342ms for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.801657   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.801706   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xxwnd
	I0729 10:41:32.801713   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.801720   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.801723   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.806250   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:32.806896   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.806909   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.806914   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.806920   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.810624   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:32.811138   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.811154   22547 pod_ready.go:81] duration metric: took 9.491176ms for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.811162   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.811205   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049
	I0729 10:41:32.811212   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.811218   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.811222   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.813570   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.814298   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:32.814312   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.814319   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.814324   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.816372   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.816861   22547 pod_ready.go:92] pod "etcd-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.816879   22547 pod_ready.go:81] duration metric: took 5.711324ms for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.816887   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.816932   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m02
	I0729 10:41:32.816939   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.816951   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.816958   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.819067   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:32.819638   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:32.819653   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.819659   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.819663   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.821529   22547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 10:41:32.822180   22547 pod_ready.go:92] pod "etcd-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:32.822199   22547 pod_ready.go:81] duration metric: took 5.30456ms for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.822217   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:32.979579   22547 request.go:629] Waited for 157.311219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:41:32.979644   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:41:32.979651   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:32.979661   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:32.979669   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:32.983237   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.180170   22547 request.go:629] Waited for 196.360554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.180246   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.180254   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.180262   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.180270   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.183720   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.184318   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.184336   22547 pod_ready.go:81] duration metric: took 362.111868ms for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.184344   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.379539   22547 request.go:629] Waited for 195.133783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:41:33.379612   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:41:33.379618   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.379629   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.379636   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.382783   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.579862   22547 request.go:629] Waited for 196.249038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:33.579920   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:33.579925   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.579932   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.579935   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.583313   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.583929   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.583949   22547 pod_ready.go:81] duration metric: took 399.596683ms for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.583962   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.780112   22547 request.go:629] Waited for 196.083438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:41:33.780175   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:41:33.780180   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.780190   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.780195   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.784281   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:41:33.979323   22547 request.go:629] Waited for 194.303521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.979387   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:33.979394   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:33.979405   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:33.979413   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:33.982854   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:33.983410   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:33.983427   22547 pod_ready.go:81] duration metric: took 399.458344ms for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:33.983436   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.180000   22547 request.go:629] Waited for 196.505232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:41:34.180055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:41:34.180060   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.180068   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.180072   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.183283   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.379207   22547 request.go:629] Waited for 195.29513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:34.379270   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:34.379275   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.379283   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.379286   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.382256   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:41:34.382826   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:34.382850   22547 pod_ready.go:81] duration metric: took 399.403885ms for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.382862   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.579871   22547 request.go:629] Waited for 196.931891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:41:34.579939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:41:34.579946   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.579957   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.579969   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.583394   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.779620   22547 request.go:629] Waited for 195.368999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:34.779699   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:34.779710   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.779720   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.779726   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.782917   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:34.783536   22547 pod_ready.go:92] pod "kube-proxy-mhbk7" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:34.783553   22547 pod_ready.go:81] duration metric: took 400.684572ms for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.783562   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:34.980174   22547 request.go:629] Waited for 196.526855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:41:34.980233   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:41:34.980239   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:34.980246   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:34.980251   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:34.983694   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.179724   22547 request.go:629] Waited for 195.358019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.179793   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.179798   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.179805   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.179809   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.182952   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.183581   22547 pod_ready.go:92] pod "kube-proxy-tf7wt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.183598   22547 pod_ready.go:81] duration metric: took 400.030612ms for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.183607   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.379805   22547 request.go:629] Waited for 196.143402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:41:35.379888   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:41:35.379898   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.379911   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.379935   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.383257   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.579240   22547 request.go:629] Waited for 195.285053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:35.579312   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:41:35.579318   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.579329   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.579337   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.582755   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.583460   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.583484   22547 pod_ready.go:81] duration metric: took 399.871989ms for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.583493   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.779637   22547 request.go:629] Waited for 196.083393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:41:35.779725   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:41:35.779733   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.779745   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.779758   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.782813   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.979984   22547 request.go:629] Waited for 196.384051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.980055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:41:35.980063   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:35.980073   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:35.980081   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:35.983518   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:35.983938   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:41:35.983954   22547 pod_ready.go:81] duration metric: took 400.455357ms for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:41:35.983965   22547 pod_ready.go:38] duration metric: took 3.200839818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
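Each of the per-pod waits above boils down to fetching the pod and checking its PodReady condition. A tiny sketch of that check; podIsReady is an assumed name, not minikube's pod_ready code:

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }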
	I0729 10:41:35.983985   22547 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:41:35.984030   22547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:41:36.000253   22547 api_server.go:72] duration metric: took 22.083700677s to wait for apiserver process to appear ...
	I0729 10:41:36.000278   22547 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:41:36.000301   22547 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0729 10:41:36.006393   22547 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0729 10:41:36.006457   22547 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0729 10:41:36.006464   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.006472   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.006477   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.007373   22547 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 10:41:36.007472   22547 api_server.go:141] control plane version: v1.30.3
	I0729 10:41:36.007486   22547 api_server.go:131] duration metric: took 7.203302ms to wait for apiserver health ...
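The health probe above is a GET /healthz on the secure port (expecting the literal body "ok") followed by GET /version. Expressed against a client-go clientset it could look roughly like the sketch below; apiserverHealthy is an assumed helper, not minikube's api_server code:

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy checks GET /healthz for the literal body "ok" and then
    // returns the server's reported version (e.g. "v1.30.3" in the run above).
    func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil || string(body) != "ok" {
            return "", fmt.Errorf("healthz not ok: %v %q", err, body)
        }
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }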
	I0729 10:41:36.007493   22547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:41:36.179890   22547 request.go:629] Waited for 172.32872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.179939   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.179945   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.179954   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.179961   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.185713   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:41:36.189832   22547 system_pods.go:59] 17 kube-system pods found
	I0729 10:41:36.189861   22547 system_pods.go:61] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:41:36.189866   22547 system_pods.go:61] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:41:36.189874   22547 system_pods.go:61] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:41:36.189877   22547 system_pods.go:61] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:41:36.189881   22547 system_pods.go:61] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:41:36.189885   22547 system_pods.go:61] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:41:36.189890   22547 system_pods.go:61] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:41:36.189893   22547 system_pods.go:61] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:41:36.189897   22547 system_pods.go:61] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:41:36.189902   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:41:36.189905   22547 system_pods.go:61] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:41:36.189908   22547 system_pods.go:61] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:41:36.189911   22547 system_pods.go:61] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:41:36.189914   22547 system_pods.go:61] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:41:36.189917   22547 system_pods.go:61] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:41:36.189920   22547 system_pods.go:61] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:41:36.189925   22547 system_pods.go:61] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:41:36.189931   22547 system_pods.go:74] duration metric: took 182.432433ms to wait for pod list to return data ...
	I0729 10:41:36.189941   22547 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:41:36.379320   22547 request.go:629] Waited for 189.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:41:36.379382   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:41:36.379387   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.379394   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.379397   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.382955   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:36.383287   22547 default_sa.go:45] found service account: "default"
	I0729 10:41:36.383305   22547 default_sa.go:55] duration metric: took 193.358744ms for default service account to be created ...
	I0729 10:41:36.383314   22547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:41:36.580150   22547 request.go:629] Waited for 196.780261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.580216   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:41:36.580222   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.580229   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.580241   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.586303   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:41:36.592134   22547 system_pods.go:86] 17 kube-system pods found
	I0729 10:41:36.592164   22547 system_pods.go:89] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:41:36.592172   22547 system_pods.go:89] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:41:36.592179   22547 system_pods.go:89] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:41:36.592185   22547 system_pods.go:89] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:41:36.592190   22547 system_pods.go:89] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:41:36.592196   22547 system_pods.go:89] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:41:36.592201   22547 system_pods.go:89] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:41:36.592207   22547 system_pods.go:89] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:41:36.592213   22547 system_pods.go:89] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:41:36.592219   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:41:36.592225   22547 system_pods.go:89] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:41:36.592230   22547 system_pods.go:89] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:41:36.592236   22547 system_pods.go:89] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:41:36.592245   22547 system_pods.go:89] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:41:36.592252   22547 system_pods.go:89] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:41:36.592259   22547 system_pods.go:89] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:41:36.592264   22547 system_pods.go:89] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:41:36.592273   22547 system_pods.go:126] duration metric: took 208.951852ms to wait for k8s-apps to be running ...
	I0729 10:41:36.592285   22547 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:41:36.592333   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:41:36.609939   22547 system_svc.go:56] duration metric: took 17.644955ms WaitForService to wait for kubelet
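The kubelet check above is a single command run on the node over SSH, "sudo systemctl is-active --quiet service kubelet", with a zero exit status taken as "running". A plain os/exec sketch of the same idea, run locally rather than through minikube's ssh_runner:

    import "os/exec"

    // kubeletActive mirrors the logged command; a zero exit status means the
    // unit is active. Local execution is assumed here, unlike ssh_runner.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }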
	I0729 10:41:36.609971   22547 kubeadm.go:582] duration metric: took 22.693430585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:41:36.610000   22547 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:41:36.779348   22547 request.go:629] Waited for 169.275297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0729 10:41:36.779426   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0729 10:41:36.779436   22547 round_trippers.go:469] Request Headers:
	I0729 10:41:36.779445   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:41:36.779452   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:41:36.782874   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:41:36.783823   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:41:36.783851   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:41:36.783864   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:41:36.783877   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:41:36.783883   22547 node_conditions.go:105] duration metric: took 173.878271ms to run NodePressure ...
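The NodePressure step lists all nodes and reads each one's ephemeral-storage and CPU capacity, which is where the two 17734596Ki / 2-CPU pairs above come from. A small sketch of that read; printNodeCapacity is an assumed name, not minikube's node_conditions code:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints its ephemeral-storage and
    // CPU capacity, mirroring the two capacity lines logged per node above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }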
	I0729 10:41:36.783897   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:41:36.783930   22547 start.go:255] writing updated cluster config ...
	I0729 10:41:36.786047   22547 out.go:177] 
	I0729 10:41:36.787598   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:41:36.787683   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:41:36.790645   22547 out.go:177] * Starting "ha-763049-m03" control-plane node in "ha-763049" cluster
	I0729 10:41:36.791960   22547 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:41:36.791995   22547 cache.go:56] Caching tarball of preloaded images
	I0729 10:41:36.792114   22547 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:41:36.792128   22547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:41:36.792257   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:41:36.792456   22547 start.go:360] acquireMachinesLock for ha-763049-m03: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:41:36.792523   22547 start.go:364] duration metric: took 42.732µs to acquireMachinesLock for "ha-763049-m03"
	I0729 10:41:36.792551   22547 start.go:93] Provisioning new machine with config: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:41:36.792669   22547 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 10:41:36.795151   22547 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:41:36.795244   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:41:36.795279   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:41:36.810095   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0729 10:41:36.810570   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:41:36.811038   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:41:36.811058   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:41:36.811432   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:41:36.811594   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:41:36.811756   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:41:36.811971   22547 start.go:159] libmachine.API.Create for "ha-763049" (driver="kvm2")
	I0729 10:41:36.812002   22547 client.go:168] LocalClient.Create starting
	I0729 10:41:36.812037   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 10:41:36.812085   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:41:36.812099   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:41:36.812161   22547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 10:41:36.812187   22547 main.go:141] libmachine: Decoding PEM data...
	I0729 10:41:36.812202   22547 main.go:141] libmachine: Parsing certificate...
	I0729 10:41:36.812227   22547 main.go:141] libmachine: Running pre-create checks...
	I0729 10:41:36.812238   22547 main.go:141] libmachine: (ha-763049-m03) Calling .PreCreateCheck
	I0729 10:41:36.812408   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:41:36.812783   22547 main.go:141] libmachine: Creating machine...
	I0729 10:41:36.812797   22547 main.go:141] libmachine: (ha-763049-m03) Calling .Create
	I0729 10:41:36.812916   22547 main.go:141] libmachine: (ha-763049-m03) Creating KVM machine...
	I0729 10:41:36.814361   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found existing default KVM network
	I0729 10:41:36.814518   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found existing private KVM network mk-ha-763049
	I0729 10:41:36.814672   22547 main.go:141] libmachine: (ha-763049-m03) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 ...
	I0729 10:41:36.814715   22547 main.go:141] libmachine: (ha-763049-m03) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:41:36.814792   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:36.814657   23477 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:41:36.814878   22547 main.go:141] libmachine: (ha-763049-m03) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:41:37.038880   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.038752   23477 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa...
	I0729 10:41:37.320257   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.320103   23477 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/ha-763049-m03.rawdisk...
	I0729 10:41:37.320296   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Writing magic tar header
	I0729 10:41:37.320311   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Writing SSH key tar header
	I0729 10:41:37.320324   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:37.320245   23477 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 ...
	I0729 10:41:37.320397   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03
	I0729 10:41:37.320428   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 10:41:37.320442   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03 (perms=drwx------)
	I0729 10:41:37.320456   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:41:37.320467   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 10:41:37.320476   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 10:41:37.320489   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:41:37.320507   22547 main.go:141] libmachine: (ha-763049-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:41:37.320518   22547 main.go:141] libmachine: (ha-763049-m03) Creating domain...
	I0729 10:41:37.320528   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:41:37.320540   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 10:41:37.320547   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:41:37.320555   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:41:37.320567   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Checking permissions on dir: /home
	I0729 10:41:37.320579   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Skipping /home - not owner
	I0729 10:41:37.321448   22547 main.go:141] libmachine: (ha-763049-m03) define libvirt domain using xml: 
	I0729 10:41:37.321481   22547 main.go:141] libmachine: (ha-763049-m03) <domain type='kvm'>
	I0729 10:41:37.321517   22547 main.go:141] libmachine: (ha-763049-m03)   <name>ha-763049-m03</name>
	I0729 10:41:37.321539   22547 main.go:141] libmachine: (ha-763049-m03)   <memory unit='MiB'>2200</memory>
	I0729 10:41:37.321549   22547 main.go:141] libmachine: (ha-763049-m03)   <vcpu>2</vcpu>
	I0729 10:41:37.321560   22547 main.go:141] libmachine: (ha-763049-m03)   <features>
	I0729 10:41:37.321571   22547 main.go:141] libmachine: (ha-763049-m03)     <acpi/>
	I0729 10:41:37.321580   22547 main.go:141] libmachine: (ha-763049-m03)     <apic/>
	I0729 10:41:37.321588   22547 main.go:141] libmachine: (ha-763049-m03)     <pae/>
	I0729 10:41:37.321597   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.321606   22547 main.go:141] libmachine: (ha-763049-m03)   </features>
	I0729 10:41:37.321621   22547 main.go:141] libmachine: (ha-763049-m03)   <cpu mode='host-passthrough'>
	I0729 10:41:37.321632   22547 main.go:141] libmachine: (ha-763049-m03)   
	I0729 10:41:37.321642   22547 main.go:141] libmachine: (ha-763049-m03)   </cpu>
	I0729 10:41:37.321650   22547 main.go:141] libmachine: (ha-763049-m03)   <os>
	I0729 10:41:37.321660   22547 main.go:141] libmachine: (ha-763049-m03)     <type>hvm</type>
	I0729 10:41:37.321669   22547 main.go:141] libmachine: (ha-763049-m03)     <boot dev='cdrom'/>
	I0729 10:41:37.321678   22547 main.go:141] libmachine: (ha-763049-m03)     <boot dev='hd'/>
	I0729 10:41:37.321701   22547 main.go:141] libmachine: (ha-763049-m03)     <bootmenu enable='no'/>
	I0729 10:41:37.321717   22547 main.go:141] libmachine: (ha-763049-m03)   </os>
	I0729 10:41:37.321729   22547 main.go:141] libmachine: (ha-763049-m03)   <devices>
	I0729 10:41:37.321741   22547 main.go:141] libmachine: (ha-763049-m03)     <disk type='file' device='cdrom'>
	I0729 10:41:37.321752   22547 main.go:141] libmachine: (ha-763049-m03)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/boot2docker.iso'/>
	I0729 10:41:37.321758   22547 main.go:141] libmachine: (ha-763049-m03)       <target dev='hdc' bus='scsi'/>
	I0729 10:41:37.321763   22547 main.go:141] libmachine: (ha-763049-m03)       <readonly/>
	I0729 10:41:37.321767   22547 main.go:141] libmachine: (ha-763049-m03)     </disk>
	I0729 10:41:37.321775   22547 main.go:141] libmachine: (ha-763049-m03)     <disk type='file' device='disk'>
	I0729 10:41:37.321781   22547 main.go:141] libmachine: (ha-763049-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:41:37.321801   22547 main.go:141] libmachine: (ha-763049-m03)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/ha-763049-m03.rawdisk'/>
	I0729 10:41:37.321821   22547 main.go:141] libmachine: (ha-763049-m03)       <target dev='hda' bus='virtio'/>
	I0729 10:41:37.321830   22547 main.go:141] libmachine: (ha-763049-m03)     </disk>
	I0729 10:41:37.321846   22547 main.go:141] libmachine: (ha-763049-m03)     <interface type='network'>
	I0729 10:41:37.321862   22547 main.go:141] libmachine: (ha-763049-m03)       <source network='mk-ha-763049'/>
	I0729 10:41:37.321873   22547 main.go:141] libmachine: (ha-763049-m03)       <model type='virtio'/>
	I0729 10:41:37.321885   22547 main.go:141] libmachine: (ha-763049-m03)     </interface>
	I0729 10:41:37.321893   22547 main.go:141] libmachine: (ha-763049-m03)     <interface type='network'>
	I0729 10:41:37.321908   22547 main.go:141] libmachine: (ha-763049-m03)       <source network='default'/>
	I0729 10:41:37.321917   22547 main.go:141] libmachine: (ha-763049-m03)       <model type='virtio'/>
	I0729 10:41:37.321925   22547 main.go:141] libmachine: (ha-763049-m03)     </interface>
	I0729 10:41:37.321932   22547 main.go:141] libmachine: (ha-763049-m03)     <serial type='pty'>
	I0729 10:41:37.321941   22547 main.go:141] libmachine: (ha-763049-m03)       <target port='0'/>
	I0729 10:41:37.321950   22547 main.go:141] libmachine: (ha-763049-m03)     </serial>
	I0729 10:41:37.321961   22547 main.go:141] libmachine: (ha-763049-m03)     <console type='pty'>
	I0729 10:41:37.321976   22547 main.go:141] libmachine: (ha-763049-m03)       <target type='serial' port='0'/>
	I0729 10:41:37.321987   22547 main.go:141] libmachine: (ha-763049-m03)     </console>
	I0729 10:41:37.321995   22547 main.go:141] libmachine: (ha-763049-m03)     <rng model='virtio'>
	I0729 10:41:37.322008   22547 main.go:141] libmachine: (ha-763049-m03)       <backend model='random'>/dev/random</backend>
	I0729 10:41:37.322016   22547 main.go:141] libmachine: (ha-763049-m03)     </rng>
	I0729 10:41:37.322023   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.322031   22547 main.go:141] libmachine: (ha-763049-m03)     
	I0729 10:41:37.322044   22547 main.go:141] libmachine: (ha-763049-m03)   </devices>
	I0729 10:41:37.322058   22547 main.go:141] libmachine: (ha-763049-m03) </domain>
	I0729 10:41:37.322071   22547 main.go:141] libmachine: (ha-763049-m03) 
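For illustration, the domain XML assembled above is what gets handed to libvirt to define and boot the guest. A minimal sketch of that step with the libvirt-go bindings follows; it is not minikube's actual kvm2 driver code, and domainXML is a placeholder for the generated XML.

// Sketch only: define and start a KVM domain from an XML document like the
// one logged above, using the libvirt-go bindings.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// Connect to the local system libvirt daemon, the same target the KVM driver uses.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Register the persistent domain definition from the XML.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Boot the freshly defined domain.
	return dom.Create()
}

func main() {
	// Placeholder XML; in practice this would be the full <domain> document above.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}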
	I0729 10:41:37.328821   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:cc:8d:a0 in network default
	I0729 10:41:37.329372   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:37.329389   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring networks are active...
	I0729 10:41:37.330107   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring network default is active
	I0729 10:41:37.330478   22547 main.go:141] libmachine: (ha-763049-m03) Ensuring network mk-ha-763049 is active
	I0729 10:41:37.330893   22547 main.go:141] libmachine: (ha-763049-m03) Getting domain xml...
	I0729 10:41:37.331525   22547 main.go:141] libmachine: (ha-763049-m03) Creating domain...
	I0729 10:41:38.572522   22547 main.go:141] libmachine: (ha-763049-m03) Waiting to get IP...
	I0729 10:41:38.573255   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:38.573642   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:38.573666   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:38.573630   23477 retry.go:31] will retry after 283.776015ms: waiting for machine to come up
	I0729 10:41:38.859117   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:38.859615   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:38.859656   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:38.859584   23477 retry.go:31] will retry after 276.316276ms: waiting for machine to come up
	I0729 10:41:39.137149   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.137618   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.137646   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.137560   23477 retry.go:31] will retry after 374.250186ms: waiting for machine to come up
	I0729 10:41:39.513141   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.513645   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.513672   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.513596   23477 retry.go:31] will retry after 383.719849ms: waiting for machine to come up
	I0729 10:41:39.899203   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:39.899607   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:39.899630   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:39.899561   23477 retry.go:31] will retry after 613.157454ms: waiting for machine to come up
	I0729 10:41:40.514395   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:40.514823   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:40.514850   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:40.514776   23477 retry.go:31] will retry after 607.711486ms: waiting for machine to come up
	I0729 10:41:41.124558   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:41.125036   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:41.125057   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:41.124988   23477 retry.go:31] will retry after 770.107414ms: waiting for machine to come up
	I0729 10:41:41.896172   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:41.896509   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:41.896529   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:41.896488   23477 retry.go:31] will retry after 1.112790457s: waiting for machine to come up
	I0729 10:41:43.010762   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:43.011203   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:43.011231   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:43.011142   23477 retry.go:31] will retry after 1.188759429s: waiting for machine to come up
	I0729 10:41:44.201555   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:44.202020   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:44.202045   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:44.201959   23477 retry.go:31] will retry after 2.128868743s: waiting for machine to come up
	I0729 10:41:46.332974   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:46.333469   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:46.333489   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:46.333424   23477 retry.go:31] will retry after 2.338540862s: waiting for machine to come up
	I0729 10:41:48.674543   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:48.675063   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:48.675092   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:48.674985   23477 retry.go:31] will retry after 2.825286266s: waiting for machine to come up
	I0729 10:41:51.503884   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:51.504275   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:51.504303   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:51.504226   23477 retry.go:31] will retry after 3.995808267s: waiting for machine to come up
	I0729 10:41:55.503905   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:41:55.504276   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find current IP address of domain ha-763049-m03 in network mk-ha-763049
	I0729 10:41:55.504303   22547 main.go:141] libmachine: (ha-763049-m03) DBG | I0729 10:41:55.504232   23477 retry.go:31] will retry after 5.274642694s: waiting for machine to come up
	I0729 10:42:00.783710   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.784124   22547 main.go:141] libmachine: (ha-763049-m03) Found IP for machine: 192.168.39.123
	I0729 10:42:00.784143   22547 main.go:141] libmachine: (ha-763049-m03) Reserving static IP address...
	I0729 10:42:00.784156   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has current primary IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.784616   22547 main.go:141] libmachine: (ha-763049-m03) DBG | unable to find host DHCP lease matching {name: "ha-763049-m03", mac: "52:54:00:91:4b:ad", ip: "192.168.39.123"} in network mk-ha-763049
	I0729 10:42:00.859558   22547 main.go:141] libmachine: (ha-763049-m03) Reserved static IP address: 192.168.39.123
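The repeated "will retry after ..." lines above come from a polling loop that waits for the guest's DHCP lease to appear, with the delay growing between attempts. A generic sketch of that pattern is below; lookupLeaseIP and the delay growth factor are assumptions for illustration, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNoLease stands in for "unable to find current IP address ... in network".
var errNoLease = errors.New("no DHCP lease for domain yet")

// lookupLeaseIP is a hypothetical placeholder for querying the libvirt
// network's DHCP leases by MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP polls for the machine's IP with a growing delay, mirroring the
// "will retry after ..." messages in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off gradually; the deadline bounds the total wait
	}
	return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:91:4b:ad", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}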
	I0729 10:42:00.859588   22547 main.go:141] libmachine: (ha-763049-m03) Waiting for SSH to be available...
	I0729 10:42:00.859603   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Getting to WaitForSSH function...
	I0729 10:42:00.862471   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.862925   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:00.862956   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.863191   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using SSH client type: external
	I0729 10:42:00.863223   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa (-rw-------)
	I0729 10:42:00.863256   22547 main.go:141] libmachine: (ha-763049-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:42:00.863270   22547 main.go:141] libmachine: (ha-763049-m03) DBG | About to run SSH command:
	I0729 10:42:00.863288   22547 main.go:141] libmachine: (ha-763049-m03) DBG | exit 0
	I0729 10:42:00.986936   22547 main.go:141] libmachine: (ha-763049-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 10:42:00.987237   22547 main.go:141] libmachine: (ha-763049-m03) KVM machine creation complete!
	I0729 10:42:00.987562   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:42:00.988120   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:00.988380   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:00.988530   22547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:42:00.988544   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:42:00.989877   22547 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:42:00.989894   22547 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:42:00.989901   22547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:42:00.989907   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:00.992192   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.992695   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:00.992722   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:00.992932   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:00.993137   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:00.993286   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:00.993404   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:00.993541   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:00.993737   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:00.993748   22547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:42:01.098289   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
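The step above verifies SSH reachability by running `exit 0` over the connection with key-based auth and host-key checking disabled. A minimal sketch of the same probe with golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders, and this is not the libmachine implementation.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // success means the guest is accepting commands
}

func main() {
	if err := sshReady("192.168.39.123:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}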
	I0729 10:42:01.098311   22547 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:42:01.098319   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.101054   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.101439   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.101469   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.101635   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.101833   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.102026   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.102175   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.102322   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.102493   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.102505   22547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:42:01.208807   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:42:01.208869   22547 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:42:01.208879   22547 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:42:01.208888   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.209121   22547 buildroot.go:166] provisioning hostname "ha-763049-m03"
	I0729 10:42:01.209152   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.209365   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.212241   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.212632   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.212663   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.212808   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.213004   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.213163   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.213317   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.213478   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.213676   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.213695   22547 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049-m03 && echo "ha-763049-m03" | sudo tee /etc/hostname
	I0729 10:42:01.335398   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049-m03
	
	I0729 10:42:01.335425   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.338393   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.338771   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.338801   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.339032   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.339261   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.339431   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.339578   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.339720   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.339923   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.339942   22547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:42:01.458069   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:42:01.458098   22547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:42:01.458120   22547 buildroot.go:174] setting up certificates
	I0729 10:42:01.458134   22547 provision.go:84] configureAuth start
	I0729 10:42:01.458144   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetMachineName
	I0729 10:42:01.458397   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:01.460935   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.461235   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.461257   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.461412   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.463357   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.463699   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.463738   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.463892   22547 provision.go:143] copyHostCerts
	I0729 10:42:01.463922   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:42:01.463962   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:42:01.463976   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:42:01.464047   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:42:01.464121   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:42:01.464138   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:42:01.464145   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:42:01.464169   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:42:01.464212   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:42:01.464228   22547 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:42:01.464234   22547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:42:01.464254   22547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:42:01.464299   22547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049-m03 san=[127.0.0.1 192.168.39.123 ha-763049-m03 localhost minikube]
	I0729 10:42:01.559347   22547 provision.go:177] copyRemoteCerts
	I0729 10:42:01.559402   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:42:01.559424   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.562058   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.562376   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.562399   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.562589   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.562787   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.562953   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.563088   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:01.646276   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:42:01.646354   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:42:01.676817   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:42:01.676901   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:42:01.703696   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:42:01.703771   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:42:01.727794   22547 provision.go:87] duration metric: took 269.645701ms to configureAuth
	I0729 10:42:01.727835   22547 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:42:01.728036   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:01.728098   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:01.730618   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.731041   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:01.731069   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:01.731216   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:01.731398   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.731554   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:01.731717   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:01.731884   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:01.732030   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:01.732044   22547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:42:02.002725   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:42:02.002756   22547 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:42:02.002765   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetURL
	I0729 10:42:02.004097   22547 main.go:141] libmachine: (ha-763049-m03) DBG | Using libvirt version 6000000
	I0729 10:42:02.006039   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.006323   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.006349   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.006552   22547 main.go:141] libmachine: Docker is up and running!
	I0729 10:42:02.006566   22547 main.go:141] libmachine: Reticulating splines...
	I0729 10:42:02.006572   22547 client.go:171] duration metric: took 25.194564051s to LocalClient.Create
	I0729 10:42:02.006592   22547 start.go:167] duration metric: took 25.194622863s to libmachine.API.Create "ha-763049"
	I0729 10:42:02.006602   22547 start.go:293] postStartSetup for "ha-763049-m03" (driver="kvm2")
	I0729 10:42:02.006615   22547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:42:02.006639   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.006915   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:42:02.006944   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.009239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.009607   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.009629   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.009837   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.010060   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.010220   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.010366   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.098515   22547 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:42:02.103018   22547 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:42:02.103108   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:42:02.103243   22547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:42:02.103339   22547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:42:02.103352   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:42:02.103455   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:42:02.113584   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:42:02.138624   22547 start.go:296] duration metric: took 132.00711ms for postStartSetup
	I0729 10:42:02.138682   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetConfigRaw
	I0729 10:42:02.139330   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:02.142115   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.142476   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.142507   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.142768   22547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:42:02.143010   22547 start.go:128] duration metric: took 25.350329223s to createHost
	I0729 10:42:02.143059   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.145150   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.145538   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.145565   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.145710   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.145900   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.146075   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.146252   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.146420   22547 main.go:141] libmachine: Using SSH client type: native
	I0729 10:42:02.146585   22547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0729 10:42:02.146598   22547 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:42:02.251590   22547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249722.231340605
	
	I0729 10:42:02.251619   22547 fix.go:216] guest clock: 1722249722.231340605
	I0729 10:42:02.251626   22547 fix.go:229] Guest: 2024-07-29 10:42:02.231340605 +0000 UTC Remote: 2024-07-29 10:42:02.143036544 +0000 UTC m=+190.099042044 (delta=88.304061ms)
	I0729 10:42:02.251641   22547 fix.go:200] guest clock delta is within tolerance: 88.304061ms
	I0729 10:42:02.251647   22547 start.go:83] releasing machines lock for "ha-763049-m03", held for 25.459111224s
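The fix.go lines above compare the guest clock (seconds.nanoseconds reported by `date` on the guest) against the local clock and accept the machine if the delta is small. A hedged illustration of that comparison is below; the tolerance value is an assumption, not the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDelta converts a guest timestamp (Unix seconds with fractional part)
// to a time.Time and returns its absolute distance from the local clock.
func clockDelta(guestSeconds float64) time.Duration {
	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
	d := time.Since(guest)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	const tolerance = 1 * time.Second // assumed threshold for illustration
	delta := clockDelta(1722249722.231340605)
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}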
	I0729 10:42:02.251665   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.251992   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:02.254864   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.255211   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.255239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.257560   22547 out.go:177] * Found network options:
	I0729 10:42:02.259099   22547 out.go:177]   - NO_PROXY=192.168.39.68,192.168.39.39
	W0729 10:42:02.260378   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 10:42:02.260399   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:42:02.260415   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.260975   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.261155   22547 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:42:02.261266   22547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:42:02.261302   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	W0729 10:42:02.261413   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 10:42:02.261438   22547 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 10:42:02.261502   22547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:42:02.261520   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:42:02.264239   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264267   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264585   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.264611   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264667   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:02.264697   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:02.264715   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.264916   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:42:02.264925   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.265081   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.265102   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:42:02.265231   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.265275   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:42:02.265436   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:42:02.513293   22547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:42:02.519462   22547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:42:02.519521   22547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:42:02.537820   22547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:42:02.537847   22547 start.go:495] detecting cgroup driver to use...
	I0729 10:42:02.537916   22547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:42:02.556691   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:42:02.572916   22547 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:42:02.572972   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:42:02.587938   22547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:42:02.604178   22547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:42:02.726347   22547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:42:02.889051   22547 docker.go:233] disabling docker service ...
	I0729 10:42:02.889113   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:42:02.904429   22547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:42:02.918427   22547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:42:03.033462   22547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:42:03.158573   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:42:03.175815   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:42:03.196455   22547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:42:03.196523   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.207608   22547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:42:03.207678   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.221815   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.236397   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.247993   22547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:42:03.259518   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.270730   22547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.290394   22547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:42:03.301657   22547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:42:03.311564   22547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:42:03.311631   22547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:42:03.326084   22547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
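The sequence above probes the bridge-netfilter sysctl, treats a failure as "module not loaded", falls back to `modprobe br_netfilter`, and then enables IPv4 forwarding. A sketch of the same fallback using os/exec is below; it runs the commands shown in the log and requires root on the target host.

package main

import (
	"log"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Probe the sysctl key first; failure usually means br_netfilter is not loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}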
	I0729 10:42:03.335954   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:03.468795   22547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:42:03.613381   22547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:42:03.613459   22547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:42:03.618799   22547 start.go:563] Will wait 60s for crictl version
	I0729 10:42:03.618862   22547 ssh_runner.go:195] Run: which crictl
	I0729 10:42:03.623207   22547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:42:03.664675   22547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
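After restarting CRI-O, the log shows minikube waiting for the CRI socket and then asking crictl for the runtime version. A small sketch of that check via os/exec follows; the socket path and timeout are taken from the log, but this is not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

// waitForCRIO polls until the CRI-O socket exists or the timeout expires.
func waitForCRIO(socket string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(socket); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", socket)
}

func main() {
	if err := waitForCRIO("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	// Query the runtime, as `sudo /usr/bin/crictl version` does in the log.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}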
	I0729 10:42:03.664766   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:42:03.695015   22547 ssh_runner.go:195] Run: crio --version
	I0729 10:42:03.727157   22547 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:42:03.728545   22547 out.go:177]   - env NO_PROXY=192.168.39.68
	I0729 10:42:03.729751   22547 out.go:177]   - env NO_PROXY=192.168.39.68,192.168.39.39
	I0729 10:42:03.731336   22547 main.go:141] libmachine: (ha-763049-m03) Calling .GetIP
	I0729 10:42:03.734069   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:03.734494   22547 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:42:03.734517   22547 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:42:03.734877   22547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:42:03.739268   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:42:03.752545   22547 mustload.go:65] Loading cluster: ha-763049
	I0729 10:42:03.752761   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:03.752994   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:03.753027   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:03.768550   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I0729 10:42:03.769040   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:03.769521   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:03.769549   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:03.769908   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:03.770102   22547 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:42:03.771791   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:42:03.772073   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:03.772111   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:03.787097   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0729 10:42:03.787507   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:03.787989   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:03.788010   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:03.788396   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:03.788570   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:42:03.788754   22547 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.123
	I0729 10:42:03.788768   22547 certs.go:194] generating shared ca certs ...
	I0729 10:42:03.788785   22547 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:03.788933   22547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:42:03.788985   22547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:42:03.788997   22547 certs.go:256] generating profile certs ...
	I0729 10:42:03.789100   22547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:42:03.789134   22547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16
	I0729 10:42:03.789153   22547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.123 192.168.39.254]
	I0729 10:42:04.432556   22547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 ...
	I0729 10:42:04.432587   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16: {Name:mk54eba0cd0267f06fc79c42e90265a04854925c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:04.432746   22547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16 ...
	I0729 10:42:04.432760   22547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16: {Name:mkce1453ef8f6513dd27f14d0c85cf6052412e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:04.432832   22547 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.b4bf9b16 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:42:04.432958   22547 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.b4bf9b16 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
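The apiserver certificate generated above is signed by the cluster CA with a SAN list covering the service IP, loopback, and every control-plane address. A minimal sketch of issuing a certificate with such SANs using Go's crypto/x509 follows; the throwaway CA in main and the validity period are assumptions for illustration, not minikube's certs.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the SANs seen in the log.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-763049-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: service IP, loopback, node IPs, and hostnames.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.39"),
			net.ParseIP("192.168.39.123"), net.ParseIP("192.168.39.254"),
		},
		DNSNames: []string{"ha-763049-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA, only so the example is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	certPEM, _, err := issueServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", certPEM)
}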
	I0729 10:42:04.433078   22547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:42:04.433092   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:42:04.433103   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:42:04.433117   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:42:04.433129   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:42:04.433141   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:42:04.433154   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:42:04.433165   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:42:04.433178   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:42:04.433224   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:42:04.433250   22547 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:42:04.433259   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:42:04.433279   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:42:04.433301   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:42:04.433321   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:42:04.433355   22547 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:42:04.433379   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:42:04.433393   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:42:04.433405   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:04.433435   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:42:04.436322   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:04.436735   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:42:04.436757   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:04.436958   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:42:04.437187   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:42:04.437331   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:42:04.437475   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:42:04.511056   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 10:42:04.517183   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 10:42:04.530480   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 10:42:04.535185   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 10:42:04.549387   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 10:42:04.554833   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 10:42:04.566042   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 10:42:04.571086   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 10:42:04.582122   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 10:42:04.586639   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 10:42:04.598544   22547 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 10:42:04.603197   22547 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 10:42:04.614394   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:42:04.641514   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:42:04.666585   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:42:04.692061   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:42:04.716643   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 10:42:04.742843   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:42:04.768917   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:42:04.794551   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:42:04.819655   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:42:04.852046   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:42:04.880019   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:42:04.905238   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 10:42:04.923313   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 10:42:04.942265   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 10:42:04.961839   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 10:42:04.979759   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 10:42:04.997918   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 10:42:05.015994   22547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 10:42:05.033334   22547 ssh_runner.go:195] Run: openssl version
	I0729 10:42:05.039448   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:42:05.051189   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.056006   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.056059   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:42:05.061814   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:42:05.073121   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:42:05.084028   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.088547   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.088610   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:42:05.095648   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:42:05.109825   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:42:05.121971   22547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.126482   22547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.126536   22547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:42:05.132602   22547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:42:05.144221   22547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:42:05.148550   22547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:42:05.148613   22547 kubeadm.go:934] updating node {m03 192.168.39.123 8443 v1.30.3 crio true true} ...
	I0729 10:42:05.148724   22547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:42:05.148754   22547 kube-vip.go:115] generating kube-vip config ...
	I0729 10:42:05.148797   22547 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:42:05.167127   22547 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:42:05.167200   22547 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 10:42:05.167265   22547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:42:05.178118   22547 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 10:42:05.178183   22547 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 10:42:05.188804   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 10:42:05.188819   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 10:42:05.188805   22547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 10:42:05.188844   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:42:05.188861   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:42:05.188866   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:05.188930   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 10:42:05.188937   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 10:42:05.204877   22547 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:42:05.204922   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 10:42:05.204946   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 10:42:05.204954   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 10:42:05.204975   22547 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 10:42:05.204978   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 10:42:05.217596   22547 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 10:42:05.217632   22547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 10:42:06.199315   22547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 10:42:06.209678   22547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 10:42:06.227689   22547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:42:06.247715   22547 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:42:06.265593   22547 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:42:06.269663   22547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:42:06.282648   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:06.398587   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:06.416025   22547 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:42:06.416342   22547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:42:06.416384   22547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:42:06.431983   22547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0729 10:42:06.432483   22547 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:42:06.432998   22547 main.go:141] libmachine: Using API Version  1
	I0729 10:42:06.433017   22547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:42:06.433359   22547 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:42:06.433543   22547 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:42:06.433668   22547 start.go:317] joinCluster: &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:06.433837   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 10:42:06.433855   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:42:06.437148   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:06.437595   22547 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:42:06.437620   22547 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:42:06.437845   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:42:06.438022   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:42:06.438212   22547 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:42:06.438360   22547 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:42:06.603924   22547 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:42:06.603974   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s82u6a.99t20mc3mt933nji --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m03 --control-plane --apiserver-advertise-address=192.168.39.123 --apiserver-bind-port=8443"
	I0729 10:42:30.505125   22547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s82u6a.99t20mc3mt933nji --discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-763049-m03 --control-plane --apiserver-advertise-address=192.168.39.123 --apiserver-bind-port=8443": (23.901124908s)
	I0729 10:42:30.505163   22547 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 10:42:31.144168   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-763049-m03 minikube.k8s.io/updated_at=2024_07_29T10_42_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=ha-763049 minikube.k8s.io/primary=false
	I0729 10:42:31.265725   22547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-763049-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 10:42:31.385404   22547 start.go:319] duration metric: took 24.951730695s to joinCluster
	I0729 10:42:31.385486   22547 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:42:31.385796   22547 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:42:31.387054   22547 out.go:177] * Verifying Kubernetes components...
	I0729 10:42:31.388293   22547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:31.676511   22547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:31.694385   22547 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:42:31.694727   22547 kapi.go:59] client config for ha-763049: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 10:42:31.694814   22547 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0729 10:42:31.695067   22547 node_ready.go:35] waiting up to 6m0s for node "ha-763049-m03" to be "Ready" ...
	I0729 10:42:31.695149   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:31.695160   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:31.695171   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:31.695178   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:31.699115   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:32.195961   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:32.195988   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:32.195999   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:32.196006   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:32.200044   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:32.695406   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:32.695429   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:32.695439   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:32.695444   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:32.699072   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.195969   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:33.196000   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:33.196011   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:33.196017   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:33.199646   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.695318   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:33.695341   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:33.695349   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:33.695352   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:33.698906   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:33.699573   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:34.195399   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:34.195419   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:34.195426   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:34.195430   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:34.198775   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:34.695985   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:34.696005   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:34.696012   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:34.696015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:34.699228   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.195283   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:35.195312   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:35.195322   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:35.195331   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:35.198836   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.695929   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:35.695954   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:35.695965   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:35.695975   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:35.699291   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:35.700070   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:36.195218   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:36.195237   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:36.195245   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:36.195250   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:36.198381   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:36.695672   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:36.695692   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:36.695699   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:36.695703   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:36.699011   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.195840   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:37.195866   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:37.195875   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:37.195879   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:37.199512   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.695967   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:37.695986   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:37.695994   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:37.695999   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:37.699898   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:37.700493   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:38.195896   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:38.195918   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:38.195924   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:38.195928   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:38.199317   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:38.695784   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:38.695833   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:38.695842   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:38.695846   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:38.699555   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.195874   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:39.195900   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:39.195908   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:39.195913   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:39.198957   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.695951   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:39.695978   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:39.695989   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:39.695995   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:39.699657   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:39.700709   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:40.195783   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:40.195808   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:40.195816   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:40.195820   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:40.199500   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:40.695534   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:40.695555   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:40.695568   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:40.695575   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:40.699277   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:41.195233   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:41.195258   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:41.195270   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:41.195276   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:41.198637   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:41.695981   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:41.696001   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:41.696009   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:41.696013   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:41.699726   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:42.195585   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:42.195606   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:42.195614   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:42.195617   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:42.199514   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:42.200029   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:42.695963   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:42.695986   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:42.695994   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:42.695997   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:42.699817   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:43.195465   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:43.195492   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:43.195503   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:43.195510   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:43.199125   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:43.695950   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:43.695971   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:43.695980   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:43.695984   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:43.699972   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:44.195258   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:44.195279   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:44.195287   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:44.195290   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:44.198755   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:44.695635   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:44.695660   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:44.695669   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:44.695672   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:44.699878   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:44.700573   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:45.195256   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:45.195277   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:45.195285   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:45.195290   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:45.198767   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:45.695996   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:45.696018   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:45.696025   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:45.696031   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:45.700276   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:46.195351   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:46.195377   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:46.195387   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:46.195394   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:46.199540   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:46.695304   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:46.695326   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:46.695338   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:46.695342   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:46.698665   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:47.196204   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:47.196224   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:47.196233   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:47.196238   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:47.199672   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:47.200315   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:47.695941   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:47.695966   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:47.695977   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:47.695982   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:47.699345   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:48.195978   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:48.196000   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:48.196010   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:48.196015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:48.199859   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:48.695953   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:48.695975   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:48.695982   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:48.695986   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:48.699516   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.196239   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:49.196261   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.196272   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.196277   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.204307   22547 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 10:42:49.204859   22547 node_ready.go:53] node "ha-763049-m03" has status "Ready":"False"
	I0729 10:42:49.696187   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:49.696208   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.696216   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.696224   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.699852   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.700522   22547 node_ready.go:49] node "ha-763049-m03" has status "Ready":"True"
	I0729 10:42:49.700542   22547 node_ready.go:38] duration metric: took 18.005457219s for node "ha-763049-m03" to be "Ready" ...
	I0729 10:42:49.700554   22547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:42:49.700625   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:49.700639   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.700649   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.700659   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.708243   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:42:49.717626   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.717729   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-l4n5p
	I0729 10:42:49.717742   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.717752   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.717761   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.720907   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:49.721510   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.721524   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.721532   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.721536   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.724105   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.724664   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.724682   22547 pod_ready.go:81] duration metric: took 7.026201ms for pod "coredns-7db6d8ff4d-l4n5p" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.724694   22547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.724743   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xxwnd
	I0729 10:42:49.724750   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.724758   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.724764   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.727647   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.728574   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.728587   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.728594   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.728599   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.731289   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.731762   22547 pod_ready.go:92] pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.731781   22547 pod_ready.go:81] duration metric: took 7.077531ms for pod "coredns-7db6d8ff4d-xxwnd" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.731792   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.731853   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049
	I0729 10:42:49.731864   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.731883   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.731891   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.734425   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.735055   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:49.735070   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.735080   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.735084   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.737477   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.738099   22547 pod_ready.go:92] pod "etcd-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.738114   22547 pod_ready.go:81] duration metric: took 6.314888ms for pod "etcd-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.738123   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.738169   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m02
	I0729 10:42:49.738175   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.738183   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.738188   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.740760   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.741292   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:49.741304   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.741311   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.741315   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.743846   22547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 10:42:49.744265   22547 pod_ready.go:92] pod "etcd-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:49.744287   22547 pod_ready.go:81] duration metric: took 6.154185ms for pod "etcd-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.744299   22547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:49.896690   22547 request.go:629] Waited for 152.298095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m03
	I0729 10:42:49.896762   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-763049-m03
	I0729 10:42:49.896769   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:49.896779   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:49.896791   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:49.900114   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.097131   22547 request.go:629] Waited for 196.262635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:50.097200   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:50.097210   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.097223   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.097231   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.102548   22547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 10:42:50.103375   22547 pod_ready.go:92] pod "etcd-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.103401   22547 pod_ready.go:81] duration metric: took 359.075866ms for pod "etcd-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.103427   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.296496   22547 request.go:629] Waited for 192.981775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:42:50.296566   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049
	I0729 10:42:50.296574   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.296584   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.296594   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.300311   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.496524   22547 request.go:629] Waited for 195.383206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:50.496591   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:50.496596   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.496604   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.496607   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.499994   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.500616   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.500632   22547 pod_ready.go:81] duration metric: took 397.197271ms for pod "kube-apiserver-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.500641   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.696778   22547 request.go:629] Waited for 196.070156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:42:50.696866   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m02
	I0729 10:42:50.696873   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.696880   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.696885   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.700503   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.896594   22547 request.go:629] Waited for 195.383469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:50.896663   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:50.896670   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:50.896682   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:50.896693   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:50.899978   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:50.900789   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:50.900807   22547 pod_ready.go:81] duration metric: took 400.160228ms for pod "kube-apiserver-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:50.900817   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.096852   22547 request.go:629] Waited for 195.971553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m03
	I0729 10:42:51.096920   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-763049-m03
	I0729 10:42:51.096928   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.096938   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.096953   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.100172   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.297170   22547 request.go:629] Waited for 196.426188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:51.297229   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:51.297236   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.297245   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.297252   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.301071   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.301735   22547 pod_ready.go:92] pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:51.301756   22547 pod_ready.go:81] duration metric: took 400.929181ms for pod "kube-apiserver-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.301768   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.497240   22547 request.go:629] Waited for 195.410619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:42:51.497294   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049
	I0729 10:42:51.497299   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.497306   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.497310   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.501004   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.696564   22547 request.go:629] Waited for 194.875696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:51.696618   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:51.696624   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.696634   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.696647   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.699832   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:51.700436   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:51.700453   22547 pod_ready.go:81] duration metric: took 398.676665ms for pod "kube-controller-manager-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.700462   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:51.897112   22547 request.go:629] Waited for 196.578899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:42:51.897178   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m02
	I0729 10:42:51.897185   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:51.897196   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:51.897203   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:51.900645   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.096984   22547 request.go:629] Waited for 195.392706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:52.097046   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:52.097052   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.097063   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.097075   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.100380   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.100903   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.100924   22547 pod_ready.go:81] duration metric: took 400.455217ms for pod "kube-controller-manager-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.100937   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.296526   22547 request.go:629] Waited for 195.503229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m03
	I0729 10:42:52.296592   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-763049-m03
	I0729 10:42:52.296599   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.296609   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.296616   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.300446   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.496414   22547 request.go:629] Waited for 195.329562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:52.496475   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:52.496482   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.496492   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.496498   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.499842   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.500508   22547 pod_ready.go:92] pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.500527   22547 pod_ready.go:81] duration metric: took 399.581822ms for pod "kube-controller-manager-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.500540   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.696694   22547 request.go:629] Waited for 196.085782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:42:52.696758   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7
	I0729 10:42:52.696772   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.696793   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.696817   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.700067   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.897137   22547 request.go:629] Waited for 196.37844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:52.897195   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:52.897200   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:52.897208   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:52.897213   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:52.901001   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:52.901699   22547 pod_ready.go:92] pod "kube-proxy-mhbk7" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:52.901717   22547 pod_ready.go:81] duration metric: took 401.169546ms for pod "kube-proxy-mhbk7" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:52.901726   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.096942   22547 request.go:629] Waited for 195.158447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:42:53.096999   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf7wt
	I0729 10:42:53.097015   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.097044   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.097053   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.100648   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.296703   22547 request.go:629] Waited for 195.252275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:53.296773   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:53.296778   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.296788   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.296815   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.300568   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.301242   22547 pod_ready.go:92] pod "kube-proxy-tf7wt" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:53.301258   22547 pod_ready.go:81] duration metric: took 399.526279ms for pod "kube-proxy-tf7wt" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.301267   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xhcs8" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.496295   22547 request.go:629] Waited for 194.965389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xhcs8
	I0729 10:42:53.496364   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xhcs8
	I0729 10:42:53.496369   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.496376   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.496381   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.500456   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:53.696797   22547 request.go:629] Waited for 195.365519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:53.696863   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:53.696871   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.696879   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.696887   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.700540   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:53.700963   22547 pod_ready.go:92] pod "kube-proxy-xhcs8" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:53.700981   22547 pod_ready.go:81] duration metric: took 399.707109ms for pod "kube-proxy-xhcs8" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.700992   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:53.896974   22547 request.go:629] Waited for 195.91913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:42:53.897026   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049
	I0729 10:42:53.897031   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:53.897038   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:53.897043   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:53.900549   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.096572   22547 request.go:629] Waited for 195.420483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:54.096623   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049
	I0729 10:42:54.096629   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.096637   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.096641   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.100058   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.100648   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.100667   22547 pod_ready.go:81] duration metric: took 399.666776ms for pod "kube-scheduler-ha-763049" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.100678   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.297109   22547 request.go:629] Waited for 196.357909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:42:54.297167   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m02
	I0729 10:42:54.297174   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.297184   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.297190   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.301126   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.497154   22547 request.go:629] Waited for 195.387946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:54.497229   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m02
	I0729 10:42:54.497236   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.497247   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.497254   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.501472   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:54.501982   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.502001   22547 pod_ready.go:81] duration metric: took 401.314896ms for pod "kube-scheduler-ha-763049-m02" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.502010   22547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.697078   22547 request.go:629] Waited for 194.982364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m03
	I0729 10:42:54.697145   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-763049-m03
	I0729 10:42:54.697152   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.697162   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.697171   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.701333   22547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 10:42:54.896252   22547 request.go:629] Waited for 194.300295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:54.896312   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-763049-m03
	I0729 10:42:54.896319   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.896329   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.896335   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.900102   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:54.900648   22547 pod_ready.go:92] pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 10:42:54.900663   22547 pod_ready.go:81] duration metric: took 398.647202ms for pod "kube-scheduler-ha-763049-m03" in "kube-system" namespace to be "Ready" ...
	I0729 10:42:54.900675   22547 pod_ready.go:38] duration metric: took 5.200108915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:42:54.900695   22547 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:42:54.900753   22547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:42:54.918082   22547 api_server.go:72] duration metric: took 23.532561601s to wait for apiserver process to appear ...
	I0729 10:42:54.918105   22547 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:42:54.918124   22547 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0729 10:42:54.922482   22547 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0729 10:42:54.922555   22547 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0729 10:42:54.922567   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:54.922577   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:54.922582   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:54.923567   22547 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 10:42:54.923630   22547 api_server.go:141] control plane version: v1.30.3
	I0729 10:42:54.923647   22547 api_server.go:131] duration metric: took 5.534322ms to wait for apiserver health ...
	I0729 10:42:54.923658   22547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:42:55.096250   22547 request.go:629] Waited for 172.526278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.096308   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.096315   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.096325   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.096329   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.103099   22547 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 10:42:55.110339   22547 system_pods.go:59] 24 kube-system pods found
	I0729 10:42:55.110371   22547 system_pods.go:61] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:42:55.110375   22547 system_pods.go:61] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:42:55.110379   22547 system_pods.go:61] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:42:55.110382   22547 system_pods.go:61] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:42:55.110386   22547 system_pods.go:61] "etcd-ha-763049-m03" [204d285b-f87f-43e4-9ed4-af013eec6ec3] Running
	I0729 10:42:55.110389   22547 system_pods.go:61] "kindnet-567mx" [a6b03c26-f15c-49ba-9f6b-a487a9cf75e6] Running
	I0729 10:42:55.110391   22547 system_pods.go:61] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:42:55.110396   22547 system_pods.go:61] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:42:55.110399   22547 system_pods.go:61] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:42:55.110402   22547 system_pods.go:61] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:42:55.110406   22547 system_pods.go:61] "kube-apiserver-ha-763049-m03" [c23bc29f-d338-4278-bd55-ff5bf69b54a7] Running
	I0729 10:42:55.110412   22547 system_pods.go:61] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:42:55.110416   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:42:55.110423   22547 system_pods.go:61] "kube-controller-manager-ha-763049-m03" [f5992b20-fb58-45d6-8fd4-e377ad3ab86f] Running
	I0729 10:42:55.110430   22547 system_pods.go:61] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:42:55.110438   22547 system_pods.go:61] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:42:55.110442   22547 system_pods.go:61] "kube-proxy-xhcs8" [34b5c03d-5eee-43e6-84e4-4c99bc710966] Running
	I0729 10:42:55.110448   22547 system_pods.go:61] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:42:55.110457   22547 system_pods.go:61] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:42:55.110460   22547 system_pods.go:61] "kube-scheduler-ha-763049-m03" [e734bd61-8b59-4feb-8dba-be4621887225] Running
	I0729 10:42:55.110463   22547 system_pods.go:61] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:42:55.110465   22547 system_pods.go:61] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:42:55.110468   22547 system_pods.go:61] "kube-vip-ha-763049-m03" [f4fadd4e-72f9-4506-b40b-35a8f6cc8dd4] Running
	I0729 10:42:55.110471   22547 system_pods.go:61] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:42:55.110478   22547 system_pods.go:74] duration metric: took 186.8141ms to wait for pod list to return data ...
	I0729 10:42:55.110488   22547 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:42:55.296937   22547 request.go:629] Waited for 186.365135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:42:55.296993   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0729 10:42:55.296999   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.297007   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.297015   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.300360   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:55.300456   22547 default_sa.go:45] found service account: "default"
	I0729 10:42:55.300472   22547 default_sa.go:55] duration metric: took 189.975295ms for default service account to be created ...
	I0729 10:42:55.300482   22547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:42:55.496927   22547 request.go:629] Waited for 196.365003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.496996   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0729 10:42:55.497004   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.497016   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.497027   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.504770   22547 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 10:42:55.511358   22547 system_pods.go:86] 24 kube-system pods found
	I0729 10:42:55.511385   22547 system_pods.go:89] "coredns-7db6d8ff4d-l4n5p" [d8f32893-3406-4eed-990f-f490efab94d6] Running
	I0729 10:42:55.511391   22547 system_pods.go:89] "coredns-7db6d8ff4d-xxwnd" [76efda45-4871-46fb-8a27-2e94f75de9f4] Running
	I0729 10:42:55.511395   22547 system_pods.go:89] "etcd-ha-763049" [b16c09bc-d43b-4593-ae0c-a2b9f3500d64] Running
	I0729 10:42:55.511400   22547 system_pods.go:89] "etcd-ha-763049-m02" [8c207076-2477-4f60-8957-3a25961c47ae] Running
	I0729 10:42:55.511404   22547 system_pods.go:89] "etcd-ha-763049-m03" [204d285b-f87f-43e4-9ed4-af013eec6ec3] Running
	I0729 10:42:55.511408   22547 system_pods.go:89] "kindnet-567mx" [a6b03c26-f15c-49ba-9f6b-a487a9cf75e6] Running
	I0729 10:42:55.511412   22547 system_pods.go:89] "kindnet-596ll" [19026f3d-836d-499b-91fa-e1a957cbad76] Running
	I0729 10:42:55.511415   22547 system_pods.go:89] "kindnet-fdmh5" [4ed222fa-9517-42bb-bbde-6632f91bda05] Running
	I0729 10:42:55.511419   22547 system_pods.go:89] "kube-apiserver-ha-763049" [a6f83bf6-9aab-4141-8f71-b5080c69a2f5] Running
	I0729 10:42:55.511423   22547 system_pods.go:89] "kube-apiserver-ha-763049-m02" [541ec415-7d3c-4cc8-a357-89b8f58fedb4] Running
	I0729 10:42:55.511427   22547 system_pods.go:89] "kube-apiserver-ha-763049-m03" [c23bc29f-d338-4278-bd55-ff5bf69b54a7] Running
	I0729 10:42:55.511432   22547 system_pods.go:89] "kube-controller-manager-ha-763049" [0418a717-be1c-49d8-bf44-b702209730e1] Running
	I0729 10:42:55.511437   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m02" [249b8e43-6745-4ddc-844b-901568a9a8b6] Running
	I0729 10:42:55.511442   22547 system_pods.go:89] "kube-controller-manager-ha-763049-m03" [f5992b20-fb58-45d6-8fd4-e377ad3ab86f] Running
	I0729 10:42:55.511445   22547 system_pods.go:89] "kube-proxy-mhbk7" [b05b91ac-ef64-4bd2-9824-83723bddfef7] Running
	I0729 10:42:55.511453   22547 system_pods.go:89] "kube-proxy-tf7wt" [c6875d82-c011-4b56-8b51-0ec9f8ddb78a] Running
	I0729 10:42:55.511457   22547 system_pods.go:89] "kube-proxy-xhcs8" [34b5c03d-5eee-43e6-84e4-4c99bc710966] Running
	I0729 10:42:55.511463   22547 system_pods.go:89] "kube-scheduler-ha-763049" [35288476-ae47-4955-bc9e-bdcf14642347] Running
	I0729 10:42:55.511466   22547 system_pods.go:89] "kube-scheduler-ha-763049-m02" [46df5d86-319e-4552-a864-802abcaf1376] Running
	I0729 10:42:55.511470   22547 system_pods.go:89] "kube-scheduler-ha-763049-m03" [e734bd61-8b59-4feb-8dba-be4621887225] Running
	I0729 10:42:55.511476   22547 system_pods.go:89] "kube-vip-ha-763049" [5f88bfd4-d887-4989-bf71-7a4459aa6655] Running
	I0729 10:42:55.511480   22547 system_pods.go:89] "kube-vip-ha-763049-m02" [f3ffc4c1-73a9-437c-87b2-b580550d4726] Running
	I0729 10:42:55.511483   22547 system_pods.go:89] "kube-vip-ha-763049-m03" [f4fadd4e-72f9-4506-b40b-35a8f6cc8dd4] Running
	I0729 10:42:55.511487   22547 system_pods.go:89] "storage-provisioner" [d48db391-d5bb-4974-88d7-f5c71e3edb4a] Running
	I0729 10:42:55.511494   22547 system_pods.go:126] duration metric: took 211.008008ms to wait for k8s-apps to be running ...
	I0729 10:42:55.511502   22547 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 10:42:55.511546   22547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:55.527956   22547 system_svc.go:56] duration metric: took 16.443555ms WaitForService to wait for kubelet
	I0729 10:42:55.527997   22547 kubeadm.go:582] duration metric: took 24.142473175s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:42:55.528024   22547 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:42:55.696510   22547 request.go:629] Waited for 168.417638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0729 10:42:55.696562   22547 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0729 10:42:55.696569   22547 round_trippers.go:469] Request Headers:
	I0729 10:42:55.696577   22547 round_trippers.go:473]     Accept: application/json, */*
	I0729 10:42:55.696582   22547 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 10:42:55.700452   22547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 10:42:55.701677   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701699   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701711   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701715   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701719   22547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 10:42:55.701722   22547 node_conditions.go:123] node cpu capacity is 2
	I0729 10:42:55.701726   22547 node_conditions.go:105] duration metric: took 173.697622ms to run NodePressure ...
	I0729 10:42:55.701740   22547 start.go:241] waiting for startup goroutines ...
	I0729 10:42:55.701761   22547 start.go:255] writing updated cluster config ...
	I0729 10:42:55.702068   22547 ssh_runner.go:195] Run: rm -f paused
	I0729 10:42:55.755723   22547 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 10:42:55.758083   22547 out.go:177] * Done! kubectl is now configured to use "ha-763049" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.381556599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4227e720-9d5d-460d-a958-75cc303afb02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.381952016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4227e720-9d5d-460d-a958-75cc303afb02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.422343577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2380b6c9-4372-4784-a610-a5295dbce399 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.422437341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2380b6c9-4372-4784-a610-a5295dbce399 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.423914501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f054c6bb-ba9b-4dee-b28a-95358505ac33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.424468387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250059424444435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f054c6bb-ba9b-4dee-b28a-95358505ac33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.425097636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b53d802d-6999-42ac-aad8-a15cb4bd7d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.425168282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b53d802d-6999-42ac-aad8-a15cb4bd7d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.425396121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b53d802d-6999-42ac-aad8-a15cb4bd7d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.436935413Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=dbedc7a4-7539-4ec2-a4f5-9456e3a76e4f name=/runtime.v1.RuntimeService/Status
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.437035009Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dbedc7a4-7539-4ec2-a4f5-9456e3a76e4f name=/runtime.v1.RuntimeService/Status
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.471003004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e542de70-6551-45c1-b5ad-042fccb58f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.471094175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e542de70-6551-45c1-b5ad-042fccb58f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.472435658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=137b70c5-4f25-41f7-b7e9-f0a29b7da984 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.472977189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250059472953635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=137b70c5-4f25-41f7-b7e9-f0a29b7da984 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.473687584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd3979ee-f004-4372-988a-6d584bb7b52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.473787874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd3979ee-f004-4372-988a-6d584bb7b52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.474025006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd3979ee-f004-4372-988a-6d584bb7b52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.516547435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac088323-13e2-4b98-b352-2257734754cd name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.516641486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac088323-13e2-4b98-b352-2257734754cd name=/runtime.v1.RuntimeService/Version
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.517729393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f299390f-19e1-4e46-ac0b-a9235972ef69 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.518239642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250059518216169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f299390f-19e1-4e46-ac0b-a9235972ef69 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.518910719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17908fc7-fa60-46b3-aff8-86bf9125d8a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.518979030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17908fc7-fa60-46b3-aff8-86bf9125d8a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:47:39 ha-763049 crio[679]: time="2024-07-29 10:47:39.519221522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722249780144150734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606596453808,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722249606590846437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47,PodSandboxId:55dc127b99c57f6fc7c05f6ee8f2e16bc6179ddd6eb632634cf88acf496572da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722249606500055312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722249594537338342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172224959
0199139099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9,PodSandboxId:d0a05f086a85bd412b01fda75459ba0b28652084bbf87bdb6086ef430e546817,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222495715
37927164,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ed60b027789c76247cc6cad30afff1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722249568437047287,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804,PodSandboxId:bdac6ea650c04df4df36afce008eb934ce4666b4a02de5943706bddb6f71ae1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722249568387025631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722249568433190612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d,PodSandboxId:09676447e6cf05d6e7675e61b753825fc00824e88a41bc41b8a57eba297d8f5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722249568286225141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17908fc7-fa60-46b3-aff8-86bf9125d8a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1cbf3ef31451       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   317257a7e3939       busybox-fc5497c4f-6s8vm
	5d7c5ba61589d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   83fa37df3ce80       coredns-7db6d8ff4d-xxwnd
	d2f12f3773838       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   1b78edf6f66dc       coredns-7db6d8ff4d-l4n5p
	752618ed171ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   55dc127b99c57       storage-provisioner
	d9b83381cff6c       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   ba95977795c59       kindnet-fdmh5
	db640a7c00be2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   374e9c4294dfb       kube-proxy-mhbk7
	25081f768fa7c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   d0a05f086a85b       kube-vip-ha-763049
	46540b0fd864e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   d0a2c28776819       etcd-ha-763049
	c31bbb31aa5f3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   d4383fe572e51       kube-scheduler-ha-763049
	e1dddce207d23       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   bdac6ea650c04       kube-apiserver-ha-763049
	5a0bf98403fc7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   09676447e6cf0       kube-controller-manager-ha-763049
	
	
	==> coredns [5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5] <==
	[INFO] 10.244.1.2:42496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001766762s
	[INFO] 10.244.0.4:47800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014284272s
	[INFO] 10.244.2.2:36542 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233506s
	[INFO] 10.244.2.2:35802 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170528s
	[INFO] 10.244.2.2:33377 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000253157s
	[INFO] 10.244.1.2:43934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123259s
	[INFO] 10.244.1.2:52875 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143409s
	[INFO] 10.244.1.2:46242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001561535s
	[INFO] 10.244.1.2:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101745s
	[INFO] 10.244.1.2:44298 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140588s
	[INFO] 10.244.1.2:41448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158036s
	[INFO] 10.244.0.4:38730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084044s
	[INFO] 10.244.0.4:57968 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085926s
	[INFO] 10.244.0.4:42578 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062705s
	[INFO] 10.244.2.2:38441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139508s
	[INFO] 10.244.2.2:50163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168308s
	[INFO] 10.244.1.2:42467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125757s
	[INFO] 10.244.1.2:39047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140115s
	[INFO] 10.244.1.2:37057 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091358s
	[INFO] 10.244.0.4:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128601s
	[INFO] 10.244.0.4:32850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078977s
	[INFO] 10.244.2.2:46995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149775s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126839s
	[INFO] 10.244.2.2:54400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169256s
	[INFO] 10.244.1.2:44674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109219s
	
	
	==> coredns [d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28] <==
	[INFO] 10.244.1.2:52525 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001622123s
	[INFO] 10.244.0.4:60906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156575s
	[INFO] 10.244.0.4:46156 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004679676s
	[INFO] 10.244.0.4:53576 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235312s
	[INFO] 10.244.0.4:58447 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097688s
	[INFO] 10.244.0.4:60709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157374s
	[INFO] 10.244.0.4:54900 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012654s
	[INFO] 10.244.0.4:45290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152164s
	[INFO] 10.244.2.2:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184737s
	[INFO] 10.244.2.2:53059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002292108s
	[INFO] 10.244.2.2:42700 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122981s
	[INFO] 10.244.2.2:44006 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001526846s
	[INFO] 10.244.2.2:41802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169435s
	[INFO] 10.244.1.2:49560 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026135s
	[INFO] 10.244.1.2:49037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111642s
	[INFO] 10.244.0.4:56631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201091s
	[INFO] 10.244.2.2:47071 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000231291s
	[INFO] 10.244.2.2:53040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132462s
	[INFO] 10.244.1.2:50475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008294s
	[INFO] 10.244.0.4:60819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157328s
	[INFO] 10.244.0.4:41267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078502s
	[INFO] 10.244.2.2:59469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127405s
	[INFO] 10.244.1.2:46106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125503s
	[INFO] 10.244.1.2:58330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150547s
	[INFO] 10.244.1.2:40880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136941s
	
	
	==> describe nodes <==
	Name:               ha-763049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:39:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:47:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:43:08 +0000   Mon, 29 Jul 2024 10:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-763049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03aa097434f1466280c9076799e841fb
	  System UUID:                03aa0974-34f1-4662-80c9-076799e841fb
	  Boot ID:                    efb539a5-e8b0-4a05-a8f7-bc957e281bdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s8vm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 coredns-7db6d8ff4d-l4n5p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m51s
	  kube-system                 coredns-7db6d8ff4d-xxwnd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m51s
	  kube-system                 etcd-ha-763049                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m5s
	  kube-system                 kindnet-fdmh5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m51s
	  kube-system                 kube-apiserver-ha-763049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-ha-763049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-proxy-mhbk7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-ha-763049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-vip-ha-763049                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m49s  kube-proxy       
	  Normal  Starting                 8m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m5s   kubelet          Node ha-763049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s   kubelet          Node ha-763049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s   kubelet          Node ha-763049 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m52s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal  NodeReady                7m34s  kubelet          Node ha-763049 status is now: NodeReady
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal  RegisteredNode           4m53s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	
	
	Name:               ha-763049-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:41:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:44:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 10:43:13 +0000   Mon, 29 Jul 2024 10:44:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-763049-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa1e337eb2824257a3354f0f8d3704f1
	  System UUID:                fa1e337e-b282-4257-a335-4f0f8d3704f1
	  Boot ID:                    8c97f7bd-1d8a-4627-9b71-d303a32f0197
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8wqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-763049-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-596ll                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-763049-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-763049-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-tf7wt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-763049-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-763049-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m28s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m28s)  kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m28s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  NodeNotReady             2m53s                  node-controller  Node ha-763049-m02 status is now: NodeNotReady
	
	
	Name:               ha-763049-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_42_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:42:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:47:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:43:28 +0000   Mon, 29 Jul 2024 10:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-763049-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c75d5420424454e9176a9ed33c59890
	  System UUID:                3c75d542-0424-454e-9176-a9ed33c59890
	  Boot ID:                    7049dd80-8dc2-4fef-8f1b-67f92b461bf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bsjch                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-763049-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-567mx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-763049-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-763049-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-xhcs8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-763049-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-763049-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-763049-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	
	
	Name:               ha-763049-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_43_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:44:03 +0000   Mon, 29 Jul 2024 10:43:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-763049-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 656290ccdc044847a0820e68660df2c3
	  System UUID:                656290cc-dc04-4847-a082-0e68660df2c3
	  Boot ID:                    04ec9d00-e96f-4bc7-8146-b4b2850b5c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fq6mz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-proxy-9d6sv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m7s (x2 over 4m7s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x2 over 4m7s)  kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x2 over 4m7s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal  NodeReady                3m46s                kubelet          Node ha-763049-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050772] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040355] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807627] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 10:39] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.616522] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.480584] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054857] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.202085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132761] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281350] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.343760] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.067157] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.957567] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.681727] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.722604] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.080303] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.543544] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.094019] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 10:41] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8] <==
	{"level":"warn","ts":"2024-07-29T10:47:39.731117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.787651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.797058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.800944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.812417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.831857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.839046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.866099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.873079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.885009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.904024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.914031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.92219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.923733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.926414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.929685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.930954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.938308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.945495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.952076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.955968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.960044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.965311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.970687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T10:47:39.976373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:47:40 up 8 min,  0 users,  load average: 0.09, 0.21, 0.13
	Linux ha-763049 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa] <==
	I0729 10:47:05.645275       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:47:15.640838       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:47:15.640890       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:47:15.641048       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:47:15.641073       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:47:15.641135       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:47:15.641140       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:47:15.641206       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:47:15.641229       1 main.go:299] handling current node
	I0729 10:47:25.648178       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:47:25.648283       1 main.go:299] handling current node
	I0729 10:47:25.648317       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:47:25.648338       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:47:25.648506       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:47:25.648540       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:47:25.648615       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:47:25.648633       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:47:35.645111       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:47:35.645257       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:47:35.645448       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:47:35.645498       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:47:35.645556       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:47:35.645580       1 main.go:299] handling current node
	I0729 10:47:35.645607       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:47:35.645611       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804] <==
	I0729 10:39:33.203550       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 10:39:33.211146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.68]
	I0729 10:39:33.212101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 10:39:33.220375       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 10:39:33.452319       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 10:39:34.074590       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 10:39:34.103106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 10:39:34.260447       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 10:39:47.632194       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 10:39:48.232408       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 10:43:01.615081       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38810: use of closed network connection
	E0729 10:43:01.802683       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38826: use of closed network connection
	E0729 10:43:02.210291       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38852: use of closed network connection
	E0729 10:43:02.404606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38864: use of closed network connection
	E0729 10:43:02.598042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38870: use of closed network connection
	E0729 10:43:02.775338       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38898: use of closed network connection
	E0729 10:43:02.960424       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38908: use of closed network connection
	E0729 10:43:03.134733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38936: use of closed network connection
	E0729 10:43:03.500029       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38968: use of closed network connection
	E0729 10:43:03.696500       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38984: use of closed network connection
	E0729 10:43:03.895036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39014: use of closed network connection
	E0729 10:43:04.075453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39034: use of closed network connection
	E0729 10:43:04.292078       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39052: use of closed network connection
	E0729 10:43:04.475009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39070: use of closed network connection
	W0729 10:44:23.228214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123 192.168.39.68]
	
	
	==> kube-controller-manager [5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d] <==
	I0729 10:42:56.700335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.215551ms"
	I0729 10:42:56.731015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.55052ms"
	I0729 10:42:56.731153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.175µs"
	I0729 10:42:56.820308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.599968ms"
	I0729 10:42:57.060119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="239.672353ms"
	I0729 10:42:57.093257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.062424ms"
	I0729 10:42:57.093547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.558µs"
	I0729 10:42:57.628481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.065µs"
	I0729 10:43:00.653647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.966383ms"
	I0729 10:43:00.653972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.293µs"
	I0729 10:43:00.883824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.583563ms"
	I0729 10:43:00.886531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.864µs"
	I0729 10:43:01.176070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.979016ms"
	I0729 10:43:01.176378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.934µs"
	E0729 10:43:32.684447       1 certificate_controller.go:146] Sync csr-j7k5b failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7k5b": the object has been modified; please apply your changes to the latest version and try again
	E0729 10:43:32.693102       1 certificate_controller.go:146] Sync csr-j7k5b failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7k5b": the object has been modified; please apply your changes to the latest version and try again
	I0729 10:43:32.970908       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-763049-m04\" does not exist"
	I0729 10:43:33.043160       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-763049-m04" podCIDRs=["10.244.3.0/24"]
	E0729 10:43:33.254956       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bb8f6b29-8161-4ae0-ab22-08c44d6649ac", ResourceVersion:"979", Generation:1, CreationTimestamp:time.Date(2024, time.July, 29, 10, 39, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\
":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240719-e7903573\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostP
ath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00157a300), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", Vo
lumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d368), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1
.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Down
wardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00167d398), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.IS
CSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Containe
r{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240719-e7903573", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00157a320)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00157a3a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:res
ource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(
*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e2f7a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001933fa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ddb780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, H
ostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00244a440)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001933ff0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on
daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0729 10:43:37.505713       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049-m04"
	I0729 10:43:53.587235       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	I0729 10:44:46.458598       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	I0729 10:44:46.607463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.616045ms"
	I0729 10:44:46.607873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.186µs"
	
	
	==> kube-proxy [db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8] <==
	I0729 10:39:50.501329       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:39:50.522003       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.68"]
	I0729 10:39:50.563814       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:39:50.563898       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:39:50.563928       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:39:50.567489       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:39:50.568044       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:39:50.568111       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:39:50.570496       1 config.go:192] "Starting service config controller"
	I0729 10:39:50.570866       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:39:50.570939       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:39:50.570966       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:39:50.571980       1 config.go:319] "Starting node config controller"
	I0729 10:39:50.572922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:39:50.671502       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:39:50.671509       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:39:50.673320       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3] <==
	W0729 10:39:32.512094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:39:32.512143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 10:39:32.523638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:39:32.523704       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:39:32.555294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:39:32.555481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:39:32.582700       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:39:32.582793       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:39:32.598070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:39:32.598125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:39:32.659733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:39:32.659946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 10:39:32.766696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:39:32.766785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 10:39:34.410320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 10:42:27.739135       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xhcs8\": pod kube-proxy-xhcs8 is already assigned to node \"ha-763049-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xhcs8" node="ha-763049-m03"
	E0729 10:42:27.740202       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xhcs8\": pod kube-proxy-xhcs8 is already assigned to node \"ha-763049-m03\"" pod="kube-system/kube-proxy-xhcs8"
	E0729 10:43:33.078903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9d6sv\": pod kube-proxy-9d6sv is already assigned to node \"ha-763049-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9d6sv" node="ha-763049-m04"
	E0729 10:43:33.079019       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e99732d0-022f-4401-80cf-44def167bfba(kube-system/kube-proxy-9d6sv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9d6sv"
	E0729 10:43:33.079720       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9d6sv\": pod kube-proxy-9d6sv is already assigned to node \"ha-763049-m04\"" pod="kube-system/kube-proxy-9d6sv"
	I0729 10:43:33.079818       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9d6sv" node="ha-763049-m04"
	E0729 10:43:33.081154       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fq6mz\": pod kindnet-fq6mz is already assigned to node \"ha-763049-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fq6mz" node="ha-763049-m04"
	E0729 10:43:33.081240       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d049f5b4-d534-4e5c-8a0b-8734d15853c5(kube-system/kindnet-fq6mz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fq6mz"
	E0729 10:43:33.081267       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fq6mz\": pod kindnet-fq6mz is already assigned to node \"ha-763049-m04\"" pod="kube-system/kindnet-fq6mz"
	I0729 10:43:33.081293       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fq6mz" node="ha-763049-m04"
	
	
	==> kubelet <==
	Jul 29 10:43:34 ha-763049 kubelet[1375]: E0729 10:43:34.240676    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:43:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:43:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:44:34 ha-763049 kubelet[1375]: E0729 10:44:34.241986    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:44:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:44:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:45:34 ha-763049 kubelet[1375]: E0729 10:45:34.241995    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:45:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:45:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:46:34 ha-763049 kubelet[1375]: E0729 10:46:34.245481    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:46:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:46:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:47:34 ha-763049 kubelet[1375]: E0729 10:47:34.241440    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:47:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:47:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:47:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:47:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-763049 -n ha-763049
helpers_test.go:261: (dbg) Run:  kubectl --context ha-763049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (382.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-763049 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-763049 -v=7 --alsologtostderr
E0729 10:48:03.511754   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:48:31.197333   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-763049 -v=7 --alsologtostderr: exit status 82 (2m1.947888087s)

                                                
                                                
-- stdout --
	* Stopping node "ha-763049-m04"  ...
	* Stopping node "ha-763049-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:41.460503   28531 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:41.460614   28531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:41.460623   28531 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:41.460627   28531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:41.460797   28531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:47:41.461043   28531 out.go:298] Setting JSON to false
	I0729 10:47:41.461135   28531 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:41.461461   28531 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:41.461555   28531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:47:41.461728   28531 mustload.go:65] Loading cluster: ha-763049
	I0729 10:47:41.461854   28531 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:47:41.461890   28531 stop.go:39] StopHost: ha-763049-m04
	I0729 10:47:41.462286   28531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:41.462327   28531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:41.478439   28531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0729 10:47:41.478929   28531 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:41.479447   28531 main.go:141] libmachine: Using API Version  1
	I0729 10:47:41.479466   28531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:41.479788   28531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:41.482279   28531 out.go:177] * Stopping node "ha-763049-m04"  ...
	I0729 10:47:41.483621   28531 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 10:47:41.483668   28531 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:47:41.483927   28531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 10:47:41.483948   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:47:41.486595   28531 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:41.487046   28531 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:43:20 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:47:41.487087   28531 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:47:41.487438   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:47:41.487620   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:47:41.487798   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:47:41.487935   28531 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:47:41.577657   28531 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 10:47:41.631353   28531 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 10:47:41.687391   28531 main.go:141] libmachine: Stopping "ha-763049-m04"...
	I0729 10:47:41.687445   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:41.688990   28531 main.go:141] libmachine: (ha-763049-m04) Calling .Stop
	I0729 10:47:41.692747   28531 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 0/120
	I0729 10:47:42.938109   28531 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:47:42.939994   28531 main.go:141] libmachine: Machine "ha-763049-m04" was stopped.
	I0729 10:47:42.940010   28531 stop.go:75] duration metric: took 1.456393685s to stop
	I0729 10:47:42.940052   28531 stop.go:39] StopHost: ha-763049-m03
	I0729 10:47:42.940339   28531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:42.940376   28531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:42.955144   28531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0729 10:47:42.955566   28531 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:42.956066   28531 main.go:141] libmachine: Using API Version  1
	I0729 10:47:42.956090   28531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:42.956393   28531 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:42.958362   28531 out.go:177] * Stopping node "ha-763049-m03"  ...
	I0729 10:47:42.959579   28531 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 10:47:42.959607   28531 main.go:141] libmachine: (ha-763049-m03) Calling .DriverName
	I0729 10:47:42.959833   28531 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 10:47:42.959862   28531 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHHostname
	I0729 10:47:42.962729   28531 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:42.963189   28531 main.go:141] libmachine: (ha-763049-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:4b:ad", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:41:51 +0000 UTC Type:0 Mac:52:54:00:91:4b:ad Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-763049-m03 Clientid:01:52:54:00:91:4b:ad}
	I0729 10:47:42.963220   28531 main.go:141] libmachine: (ha-763049-m03) DBG | domain ha-763049-m03 has defined IP address 192.168.39.123 and MAC address 52:54:00:91:4b:ad in network mk-ha-763049
	I0729 10:47:42.963347   28531 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHPort
	I0729 10:47:42.963529   28531 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHKeyPath
	I0729 10:47:42.963672   28531 main.go:141] libmachine: (ha-763049-m03) Calling .GetSSHUsername
	I0729 10:47:42.963792   28531 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m03/id_rsa Username:docker}
	I0729 10:47:43.049972   28531 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 10:47:43.107088   28531 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 10:47:43.162437   28531 main.go:141] libmachine: Stopping "ha-763049-m03"...
	I0729 10:47:43.162463   28531 main.go:141] libmachine: (ha-763049-m03) Calling .GetState
	I0729 10:47:43.163985   28531 main.go:141] libmachine: (ha-763049-m03) Calling .Stop
	I0729 10:47:43.167139   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 0/120
	I0729 10:47:44.168515   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 1/120
	I0729 10:47:45.169776   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 2/120
	I0729 10:47:46.171079   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 3/120
	I0729 10:47:47.172834   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 4/120
	I0729 10:47:48.174713   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 5/120
	I0729 10:47:49.176430   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 6/120
	I0729 10:47:50.177949   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 7/120
	I0729 10:47:51.179387   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 8/120
	I0729 10:47:52.181022   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 9/120
	I0729 10:47:53.183385   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 10/120
	I0729 10:47:54.185549   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 11/120
	I0729 10:47:55.187105   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 12/120
	I0729 10:47:56.188951   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 13/120
	I0729 10:47:57.190600   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 14/120
	I0729 10:47:58.192585   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 15/120
	I0729 10:47:59.194055   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 16/120
	I0729 10:48:00.195930   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 17/120
	I0729 10:48:01.197619   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 18/120
	I0729 10:48:02.199274   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 19/120
	I0729 10:48:03.201075   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 20/120
	I0729 10:48:04.202725   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 21/120
	I0729 10:48:05.203972   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 22/120
	I0729 10:48:06.205885   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 23/120
	I0729 10:48:07.207327   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 24/120
	I0729 10:48:08.208903   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 25/120
	I0729 10:48:09.210474   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 26/120
	I0729 10:48:10.212122   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 27/120
	I0729 10:48:11.213712   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 28/120
	I0729 10:48:12.215281   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 29/120
	I0729 10:48:13.217077   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 30/120
	I0729 10:48:14.218736   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 31/120
	I0729 10:48:15.220088   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 32/120
	I0729 10:48:16.221908   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 33/120
	I0729 10:48:17.223266   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 34/120
	I0729 10:48:18.224982   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 35/120
	I0729 10:48:19.226398   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 36/120
	I0729 10:48:20.228338   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 37/120
	I0729 10:48:21.229704   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 38/120
	I0729 10:48:22.231143   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 39/120
	I0729 10:48:23.233002   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 40/120
	I0729 10:48:24.234377   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 41/120
	I0729 10:48:25.235797   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 42/120
	I0729 10:48:26.237149   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 43/120
	I0729 10:48:27.238857   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 44/120
	I0729 10:48:28.240705   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 45/120
	I0729 10:48:29.242288   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 46/120
	I0729 10:48:30.243838   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 47/120
	I0729 10:48:31.245216   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 48/120
	I0729 10:48:32.246980   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 49/120
	I0729 10:48:33.248951   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 50/120
	I0729 10:48:34.250753   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 51/120
	I0729 10:48:35.252473   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 52/120
	I0729 10:48:36.253844   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 53/120
	I0729 10:48:37.255301   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 54/120
	I0729 10:48:38.257236   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 55/120
	I0729 10:48:39.258660   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 56/120
	I0729 10:48:40.260485   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 57/120
	I0729 10:48:41.261771   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 58/120
	I0729 10:48:42.263474   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 59/120
	I0729 10:48:43.265317   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 60/120
	I0729 10:48:44.266853   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 61/120
	I0729 10:48:45.268127   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 62/120
	I0729 10:48:46.269464   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 63/120
	I0729 10:48:47.270893   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 64/120
	I0729 10:48:48.272775   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 65/120
	I0729 10:48:49.274454   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 66/120
	I0729 10:48:50.275914   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 67/120
	I0729 10:48:51.277160   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 68/120
	I0729 10:48:52.278636   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 69/120
	I0729 10:48:53.280389   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 70/120
	I0729 10:48:54.281824   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 71/120
	I0729 10:48:55.283223   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 72/120
	I0729 10:48:56.285140   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 73/120
	I0729 10:48:57.286317   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 74/120
	I0729 10:48:58.288011   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 75/120
	I0729 10:48:59.289434   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 76/120
	I0729 10:49:00.290749   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 77/120
	I0729 10:49:01.292220   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 78/120
	I0729 10:49:02.293598   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 79/120
	I0729 10:49:03.295037   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 80/120
	I0729 10:49:04.297102   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 81/120
	I0729 10:49:05.298405   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 82/120
	I0729 10:49:06.299949   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 83/120
	I0729 10:49:07.301359   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 84/120
	I0729 10:49:08.303192   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 85/120
	I0729 10:49:09.304643   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 86/120
	I0729 10:49:10.306420   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 87/120
	I0729 10:49:11.307661   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 88/120
	I0729 10:49:12.309998   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 89/120
	I0729 10:49:13.311549   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 90/120
	I0729 10:49:14.313242   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 91/120
	I0729 10:49:15.314734   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 92/120
	I0729 10:49:16.316704   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 93/120
	I0729 10:49:17.318057   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 94/120
	I0729 10:49:18.319845   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 95/120
	I0729 10:49:19.321165   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 96/120
	I0729 10:49:20.322737   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 97/120
	I0729 10:49:21.323931   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 98/120
	I0729 10:49:22.325383   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 99/120
	I0729 10:49:23.327398   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 100/120
	I0729 10:49:24.329494   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 101/120
	I0729 10:49:25.330848   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 102/120
	I0729 10:49:26.332316   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 103/120
	I0729 10:49:27.333695   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 104/120
	I0729 10:49:28.335909   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 105/120
	I0729 10:49:29.337223   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 106/120
	I0729 10:49:30.338533   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 107/120
	I0729 10:49:31.340102   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 108/120
	I0729 10:49:32.341550   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 109/120
	I0729 10:49:33.343484   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 110/120
	I0729 10:49:34.345189   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 111/120
	I0729 10:49:35.346562   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 112/120
	I0729 10:49:36.348148   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 113/120
	I0729 10:49:37.349504   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 114/120
	I0729 10:49:38.351480   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 115/120
	I0729 10:49:39.353009   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 116/120
	I0729 10:49:40.354580   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 117/120
	I0729 10:49:41.355980   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 118/120
	I0729 10:49:42.357226   28531 main.go:141] libmachine: (ha-763049-m03) Waiting for machine to stop 119/120
	I0729 10:49:43.358101   28531 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 10:49:43.358169   28531 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 10:49:43.360115   28531 out.go:177] 
	W0729 10:49:43.361862   28531 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 10:49:43.361879   28531 out.go:239] * 
	* 
	W0729 10:49:43.364135   28531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:49:43.365577   28531 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-763049 -v=7 --alsologtostderr" : exit status 82
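Editor's note on the exit status 82 above: the stderr trace shows minikube backing up /etc/cni and /etc/kubernetes over SSH, asking the kvm2 driver to stop the domain, and then polling the machine state up to 120 times (about two minutes) before giving up with GUEST_STOP_TIMEOUT. The sketch below is only an illustration of that poll-until-stopped pattern; the names (vmState, waitForStop) are hypothetical stand-ins, not the actual minikube/libmachine API.

// Minimal sketch, assuming a driver that exposes a state getter; this is not
// minikube's real stop code, only the shape of the retry loop traced above.
package main

import (
	"errors"
	"fmt"
	"time"
)

type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

// waitForStop polls getState once per interval, up to maxRetries times,
// mirroring the "Waiting for machine to stop N/120" lines in the log. If the
// VM never reaches Stopped, it returns the error that the CLI would surface
// as GUEST_STOP_TIMEOUT (exit status 82).
func waitForStop(getState func() vmState, maxRetries int, interval time.Duration) error {
	for i := 0; i < maxRetries; i++ {
		if getState() == stateStopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that ignores the stop request, as ha-763049-m03 did above.
	err := waitForStop(func() vmState { return stateRunning }, 5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}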
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-763049 --wait=true -v=7 --alsologtostderr
E0729 10:49:57.915732   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:51:20.967591   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:53:03.511322   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-763049 --wait=true -v=7 --alsologtostderr: (4m17.644123693s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-763049
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-763049 -n ha-763049
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-763049 logs -n 25: (1.900567344s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m04 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp testdata/cp-test.txt                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m04_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03:/home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m03 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-763049 node stop m02 -v=7                                                    | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-763049 node start m02 -v=7                                                   | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-763049 -v=7                                                          | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-763049 -v=7                                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-763049 --wait=true -v=7                                                   | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:49 UTC | 29 Jul 24 10:54 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-763049                                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:54 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:49:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:49:43.408985   29021 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:49:43.409270   29021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:49:43.409284   29021 out.go:304] Setting ErrFile to fd 2...
	I0729 10:49:43.409289   29021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:49:43.409507   29021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:49:43.410079   29021 out.go:298] Setting JSON to false
	I0729 10:49:43.411085   29021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1929,"bootTime":1722248254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:49:43.411154   29021 start.go:139] virtualization: kvm guest
	I0729 10:49:43.413383   29021 out.go:177] * [ha-763049] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:49:43.414957   29021 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:49:43.414960   29021 notify.go:220] Checking for updates...
	I0729 10:49:43.416388   29021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:49:43.418001   29021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:49:43.419182   29021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:49:43.420413   29021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:49:43.421713   29021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:49:43.423461   29021 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:49:43.423569   29021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:49:43.424019   29021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:49:43.424091   29021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:49:43.438976   29021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0729 10:49:43.439378   29021 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:49:43.440054   29021 main.go:141] libmachine: Using API Version  1
	I0729 10:49:43.440083   29021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:49:43.440513   29021 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:49:43.440798   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.475335   29021 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 10:49:43.476670   29021 start.go:297] selected driver: kvm2
	I0729 10:49:43.476684   29021 start.go:901] validating driver "kvm2" against &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:49:43.476809   29021 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:49:43.477189   29021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:49:43.477304   29021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:49:43.492313   29021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:49:43.492949   29021 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:49:43.493006   29021 cni.go:84] Creating CNI manager for ""
	I0729 10:49:43.493019   29021 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:49:43.493075   29021 start.go:340] cluster config:
	{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:49:43.493198   29021 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:49:43.495666   29021 out.go:177] * Starting "ha-763049" primary control-plane node in "ha-763049" cluster
	I0729 10:49:43.497054   29021 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:49:43.497105   29021 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:49:43.497130   29021 cache.go:56] Caching tarball of preloaded images
	I0729 10:49:43.497237   29021 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:49:43.497252   29021 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:49:43.497375   29021 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:49:43.497619   29021 start.go:360] acquireMachinesLock for ha-763049: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:49:43.497690   29021 start.go:364] duration metric: took 42.837µs to acquireMachinesLock for "ha-763049"
	I0729 10:49:43.497712   29021 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:49:43.497721   29021 fix.go:54] fixHost starting: 
	I0729 10:49:43.497998   29021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:49:43.498038   29021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:49:43.512187   29021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0729 10:49:43.512573   29021 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:49:43.513081   29021 main.go:141] libmachine: Using API Version  1
	I0729 10:49:43.513109   29021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:49:43.513503   29021 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:49:43.513704   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.513889   29021 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:49:43.515399   29021 fix.go:112] recreateIfNeeded on ha-763049: state=Running err=<nil>
	W0729 10:49:43.515434   29021 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:49:43.517293   29021 out.go:177] * Updating the running kvm2 "ha-763049" VM ...
	I0729 10:49:43.518507   29021 machine.go:94] provisionDockerMachine start ...
	I0729 10:49:43.518523   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.518733   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.521014   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.521521   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.521544   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.521701   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.521877   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.522073   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.522214   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.522394   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.522574   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.522584   29021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:49:43.631834   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:49:43.631857   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.632087   29021 buildroot.go:166] provisioning hostname "ha-763049"
	I0729 10:49:43.632112   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.632338   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.634879   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.635224   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.635251   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.635423   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.635599   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.635811   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.635970   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.636167   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.636337   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.636350   29021 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049 && echo "ha-763049" | sudo tee /etc/hostname
	I0729 10:49:43.763357   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:49:43.763389   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.766256   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.766622   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.766645   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.766846   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.767049   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.767202   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.767343   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.767509   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.767713   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.767737   29021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:49:43.872103   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:49:43.872136   29021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:49:43.872177   29021 buildroot.go:174] setting up certificates
	I0729 10:49:43.872188   29021 provision.go:84] configureAuth start
	I0729 10:49:43.872200   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.872475   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:49:43.875020   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.875413   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.875436   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.875550   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.877616   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.877953   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.877972   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.878188   29021 provision.go:143] copyHostCerts
	I0729 10:49:43.878219   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:49:43.878250   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:49:43.878259   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:49:43.878325   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:49:43.878422   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:49:43.878440   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:49:43.878444   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:49:43.878468   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:49:43.878521   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:49:43.878541   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:49:43.878547   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:49:43.878569   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:49:43.878628   29021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049 san=[127.0.0.1 192.168.39.68 ha-763049 localhost minikube]
	I0729 10:49:44.003044   29021 provision.go:177] copyRemoteCerts
	I0729 10:49:44.003109   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:49:44.003138   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:44.006115   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.006649   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:44.006680   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.006927   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:44.007122   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.007323   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:44.007508   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:49:44.089699   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:49:44.089778   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:49:44.117665   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:49:44.117740   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 10:49:44.145917   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:49:44.145990   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:49:44.178252   29021 provision.go:87] duration metric: took 306.049751ms to configureAuth
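The configureAuth step above regenerates the machine's server certificate with the SANs listed in the san=[...] line and copies it to the ServerCertRemotePath (/etc/docker/server.pem) on the guest. A minimal sketch for confirming those SANs on the copied certificate, assuming openssl is available inside the guest; the expected names are inferred from the log above, not read back from the machine:

    # Print the Subject Alternative Names on the provisioned server certificate.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"
    # Expected to include the entries from san=[...] above:
    #   DNS:ha-763049, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.39.68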
	I0729 10:49:44.178284   29021 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:49:44.178502   29021 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:49:44.178565   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:44.181419   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.181847   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:44.181891   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.182100   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:44.182278   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.182426   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.182537   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:44.182692   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:44.182904   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:44.182923   29021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:51:14.997743   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:51:14.997772   29021 machine.go:97] duration metric: took 1m31.479254237s to provisionDockerMachine
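Note the timestamps around the container-runtime step above: the command was issued at 10:49:44 and its SSH session did not return until 10:51:14, so the crio restart accounts for almost all of the 1m31.48s provisionDockerMachine duration. The %!s(MISSING) in the logged command appears to be a format-verb artifact from re-printing a string containing a literal %s; the command actually sent to the guest is equivalent to this sketch:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio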
	I0729 10:51:14.997783   29021 start.go:293] postStartSetup for "ha-763049" (driver="kvm2")
	I0729 10:51:14.997794   29021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:51:14.997809   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:14.998183   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:51:14.998219   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.001685   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.002160   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.002186   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.002325   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.002518   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.002720   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.002846   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.086913   29021 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:51:15.091315   29021 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:51:15.091338   29021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:51:15.091395   29021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:51:15.091475   29021 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:51:15.091486   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:51:15.091574   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:51:15.101166   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:51:15.129390   29021 start.go:296] duration metric: took 131.592211ms for postStartSetup
	I0729 10:51:15.129434   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.129736   29021 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 10:51:15.129761   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.132347   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.132701   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.132726   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.132859   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.133023   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.133202   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.133393   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	W0729 10:51:15.217266   29021 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 10:51:15.217293   29021 fix.go:56] duration metric: took 1m31.719573082s for fixHost
	I0729 10:51:15.217317   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.219860   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.220222   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.220239   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.220435   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.220645   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.220865   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.221001   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.221210   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:51:15.221369   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:51:15.221379   29021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:51:15.323975   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722250275.289847121
	
	I0729 10:51:15.324004   29021 fix.go:216] guest clock: 1722250275.289847121
	I0729 10:51:15.324015   29021 fix.go:229] Guest: 2024-07-29 10:51:15.289847121 +0000 UTC Remote: 2024-07-29 10:51:15.21730072 +0000 UTC m=+91.842665800 (delta=72.546401ms)
	I0729 10:51:15.324042   29021 fix.go:200] guest clock delta is within tolerance: 72.546401ms
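The clock check runs date +%s.%N on the guest (the %!s(MISSING).%!N(MISSING) rendering is the same format-verb artifact) and compares the result to the host clock; here the skew was ~72.5ms, well inside tolerance. A minimal sketch of the same comparison done by hand, assuming GNU date on both sides and the SSH key and user shown earlier in this log:

    guest=$(ssh -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa \
        docker@192.168.39.68 'date +%s.%N')
    host=$(date +%s.%N)
    # report the skew in milliseconds
    echo "delta: $(echo "($host - $guest) * 1000" | bc) ms"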
	I0729 10:51:15.324047   29021 start.go:83] releasing machines lock for "ha-763049", held for 1m31.826344747s
	I0729 10:51:15.324081   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.324369   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:51:15.326727   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.327061   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.327084   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.327229   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.327746   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.327955   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.328073   29021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:51:15.328116   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.328156   29021 ssh_runner.go:195] Run: cat /version.json
	I0729 10:51:15.328175   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.330765   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331029   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331195   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.331219   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331332   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.331445   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.331465   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331522   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.331672   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.331690   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.331853   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.331877   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.332014   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.332166   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.432315   29021 ssh_runner.go:195] Run: systemctl --version
	I0729 10:51:15.438800   29021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:51:15.600587   29021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:51:15.612066   29021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:51:15.612139   29021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:51:15.623504   29021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 10:51:15.623531   29021 start.go:495] detecting cgroup driver to use...
	I0729 10:51:15.623591   29021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:51:15.641036   29021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:51:15.656481   29021 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:51:15.656547   29021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:51:15.671820   29021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:51:15.687239   29021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:51:15.849582   29021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:51:16.001986   29021 docker.go:233] disabling docker service ...
	I0729 10:51:16.002044   29021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:51:16.019870   29021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:51:16.035215   29021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:51:16.197283   29021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:51:16.347969   29021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:51:16.363148   29021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:51:16.383050   29021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:51:16.383116   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.395365   29021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:51:16.395446   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.408575   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.420122   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.431743   29021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:51:16.443214   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.454061   29021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.464925   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.476057   29021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:51:16.486249   29021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:51:16.496322   29021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:51:16.643776   29021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:51:25.432305   29021 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.788500054s)
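The sequence above rewrites /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before restarting CRI-O, which here took ~8.8s. A quick way to confirm the resulting drop-in values on the guest; the expected lines are inferred from the sed expressions logged above, not read from the machine:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits logged above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",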
	I0729 10:51:25.432335   29021 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:51:25.432395   29021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:51:25.437669   29021 start.go:563] Will wait 60s for crictl version
	I0729 10:51:25.437728   29021 ssh_runner.go:195] Run: which crictl
	I0729 10:51:25.441709   29021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:51:25.482903   29021 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:51:25.482984   29021 ssh_runner.go:195] Run: crio --version
	I0729 10:51:25.513327   29021 ssh_runner.go:195] Run: crio --version
	I0729 10:51:25.546814   29021 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:51:25.548257   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:51:25.550986   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:25.551437   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:25.551469   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:25.551675   29021 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:51:25.556784   29021 kubeadm.go:883] updating cluster {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:51:25.556954   29021 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:51:25.557015   29021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:51:25.605052   29021 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:51:25.605074   29021 crio.go:433] Images already preloaded, skipping extraction
	I0729 10:51:25.605146   29021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:51:25.644156   29021 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:51:25.644176   29021 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:51:25.644188   29021 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 10:51:25.644314   29021 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
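The kubelet unit fragment above is what later lands in the systemd drop-in (see the 308-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). A minimal check of the merged unit the guest's systemd actually runs, including the ExecStart override with --node-ip=192.168.39.68:

    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in
    sudo systemctl cat kubelet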
	I0729 10:51:25.644391   29021 ssh_runner.go:195] Run: crio config
	I0729 10:51:25.698281   29021 cni.go:84] Creating CNI manager for ""
	I0729 10:51:25.698298   29021 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:51:25.698309   29021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:51:25.698332   29021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-763049 NodeName:ha-763049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:51:25.698474   29021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-763049"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
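This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new later in the log (2150 bytes). If it needs sanity-checking by hand, recent kubeadm releases can validate the documents directly; a minimal sketch, assuming the bundled kubeadm v1.30.3 binary (which ships the kubeadm config validate subcommand):

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new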
	
	I0729 10:51:25.698493   29021 kube-vip.go:115] generating kube-vip config ...
	I0729 10:51:25.698542   29021 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:51:25.710759   29021 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:51:25.710873   29021 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
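The generated manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes) so the kubelet runs kube-vip as a static pod; per the env vars above it announces the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane members. Two quick checks that the VIP is live on whichever node currently holds the lease, assuming the interface and address from the manifest:

    # the leader-elected kube-vip instance adds the VIP as a /32 on eth0
    ip addr show dev eth0 | grep 192.168.39.254
    # the HA apiserver endpoint should answer on the VIP (unauthenticated /version is normally permitted)
    curl -k https://192.168.39.254:8443/version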
	I0729 10:51:25.710931   29021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:51:25.720497   29021 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:51:25.720550   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 10:51:25.730153   29021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 10:51:25.746857   29021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:51:25.763705   29021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 10:51:25.781921   29021 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:51:25.800129   29021 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:51:25.806251   29021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:51:25.951402   29021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:51:25.967619   29021 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.68
	I0729 10:51:25.967643   29021 certs.go:194] generating shared ca certs ...
	I0729 10:51:25.967660   29021 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:25.967862   29021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:51:25.967916   29021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:51:25.967930   29021 certs.go:256] generating profile certs ...
	I0729 10:51:25.968023   29021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:51:25.968056   29021 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa
	I0729 10:51:25.968079   29021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.123 192.168.39.254]
	I0729 10:51:26.180556   29021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa ...
	I0729 10:51:26.180591   29021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa: {Name:mk6a9fc39645df1b02a8b78f419d064af5259f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:26.180829   29021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa ...
	I0729 10:51:26.180848   29021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa: {Name:mk546957a28a9db418ab3f39b372c82f974b2492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:26.180955   29021 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:51:26.181187   29021 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:51:26.181384   29021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:51:26.181405   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:51:26.181424   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:51:26.181446   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:51:26.181467   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:51:26.181487   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:51:26.181504   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:51:26.181524   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:51:26.181544   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:51:26.181621   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:51:26.181689   29021 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:51:26.181713   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:51:26.181754   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:51:26.181799   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:51:26.181826   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:51:26.181870   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:51:26.181902   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.181918   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.181933   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.182558   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:51:26.208690   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:51:26.233466   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:51:26.258571   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:51:26.283391   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 10:51:26.307793   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:51:26.335993   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:51:26.361125   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:51:26.387309   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:51:26.412177   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:51:26.436000   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:51:26.460054   29021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:51:26.476879   29021 ssh_runner.go:195] Run: openssl version
	I0729 10:51:26.482974   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:51:26.495340   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.499977   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.500031   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.505960   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:51:26.516412   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:51:26.527968   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.533331   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.533421   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.539370   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:51:26.549548   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:51:26.561288   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.566383   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.566455   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.572999   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
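The openssl x509 -hash / ln -fs ...<hash>.0 pairs above implement the standard OpenSSL CA-directory layout: each trusted certificate is linked under its subject-name hash so TLS clients can locate it in /etc/ssl/certs. A minimal illustration of the relationship, using the minikubeCA hash b5213941 seen above:

    # the 8-hex-digit link name is the subject hash of the certificate
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem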
	I0729 10:51:26.583738   29021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:51:26.589031   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:51:26.594912   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:51:26.600785   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:51:26.606555   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:51:26.613016   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:51:26.618630   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
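Each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit code means the cert expires inside that window, which is what would trigger regeneration. Minimal usage sketch for one of the checked files:

    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate expires within 24h; needs regeneration"
    fi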
	I0729 10:51:26.624347   29021 kubeadm.go:392] StartCluster: {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:51:26.624440   29021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:51:26.624484   29021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:51:26.668986   29021 cri.go:89] found id: "4b419adf4fad5381eaa65217b85d9a35efd45639dc697b36bdebf94a0832fec8"
	I0729 10:51:26.669010   29021 cri.go:89] found id: "b062f7f05731f65d65d798709f57fe001b52fd30f1270d06e7e8719d12006bc5"
	I0729 10:51:26.669016   29021 cri.go:89] found id: "8bd4ba6b4b03e1d22280d32240e644343933942733ec2790b4c7fa429beecf53"
	I0729 10:51:26.669020   29021 cri.go:89] found id: "5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5"
	I0729 10:51:26.669025   29021 cri.go:89] found id: "d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28"
	I0729 10:51:26.669030   29021 cri.go:89] found id: "752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47"
	I0729 10:51:26.669032   29021 cri.go:89] found id: "d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa"
	I0729 10:51:26.669035   29021 cri.go:89] found id: "db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8"
	I0729 10:51:26.669037   29021 cri.go:89] found id: "25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9"
	I0729 10:51:26.669043   29021 cri.go:89] found id: "46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8"
	I0729 10:51:26.669046   29021 cri.go:89] found id: "c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3"
	I0729 10:51:26.669048   29021 cri.go:89] found id: "e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804"
	I0729 10:51:26.669051   29021 cri.go:89] found id: "5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d"
	I0729 10:51:26.669054   29021 cri.go:89] found id: ""
	I0729 10:51:26.669096   29021 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.729002485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250441728969265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1a8b381-e62e-4375-b54f-0520a5a2dfbe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.729682551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9650e38-57f4-4684-9cf9-52860cb53f1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.729822925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9650e38-57f4-4684-9cf9-52860cb53f1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.730269763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9650e38-57f4-4684-9cf9-52860cb53f1b name=/runtime.v1.RuntimeService/ListContainers
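For context, the RuntimeService/ImageService entries above are CRI-O's debug logging of CRI calls made over its socket. As a rough, hypothetical illustration (not taken from this test run, and assuming crictl is available on the node), the same Version, ImageFsInfo and ListContainers data could be queried manually:

	    # mirrors /runtime.v1.RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    # mirrors /runtime.v1.ImageService/ImageFsInfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	    # mirrors /runtime.v1.RuntimeService/ListContainers with no filters (all states)
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json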
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.779500346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=216b11d6-ed86-45b1-bdb6-d53fb9606875 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.779602100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=216b11d6-ed86-45b1-bdb6-d53fb9606875 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.781081793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2660ceb-99e6-4bd6-8a6c-5ac38d805586 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.781521870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250441781497756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2660ceb-99e6-4bd6-8a6c-5ac38d805586 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.782295921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4c56c95-321e-4543-94a1-55553b8b89e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.782372679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4c56c95-321e-4543-94a1-55553b8b89e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.782863432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4c56c95-321e-4543-94a1-55553b8b89e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.834927315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68edb810-58e6-4243-be80-2b45aec8dc12 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.835029106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68edb810-58e6-4243-be80-2b45aec8dc12 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.837410544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=551a3f9e-6253-4dbb-8e04-023e1b294ded name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.840157079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250441840118869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=551a3f9e-6253-4dbb-8e04-023e1b294ded name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.844370120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fca6e564-1fe7-460d-94aa-e068c38e4a0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.844440268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fca6e564-1fe7-460d-94aa-e068c38e4a0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.845056822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fca6e564-1fe7-460d-94aa-e068c38e4a0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.922981598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97ec2cf6-1a59-4450-8e9a-fbfdf4a0dc1d name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.923083160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97ec2cf6-1a59-4450-8e9a-fbfdf4a0dc1d name=/runtime.v1.RuntimeService/Version
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.924228720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a89a130-66f9-428b-a0a4-b2363686ccc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.925224571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250441925199641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a89a130-66f9-428b-a0a4-b2363686ccc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.925970618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a53aa0f0-f5b3-4fc2-9a45-7cb3ba760bc4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.926051768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a53aa0f0-f5b3-4fc2-9a45-7cb3ba760bc4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:54:01 ha-763049 crio[3746]: time="2024-07-29 10:54:01.926515366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a53aa0f0-f5b3-4fc2-9a45-7cb3ba760bc4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	431b340150506       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   efa540dfb2986       storage-provisioner
	09674fa79f4c5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   2f8e10cced82e       kube-controller-manager-ha-763049
	5b20547718a25       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   aadb979b81e36       kube-apiserver-ha-763049
	381606df0bdae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   efa540dfb2986       storage-provisioner
	873252844d658       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c0bdf45442621       busybox-fc5497c4f-6s8vm
	6cd55ad825ed2       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   356d97ec95395       kube-vip-ha-763049
	e449897d6adc7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   5f62a1e4a4b35       kube-proxy-mhbk7
	8a03b13750728       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   229b61b75dafc       kindnet-fdmh5
	2916a63d5ed37       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   44713f32a945a       coredns-7db6d8ff4d-xxwnd
	838e0d2cb45ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   d80cc6c195465       coredns-7db6d8ff4d-l4n5p
	dbc91bcec9ad6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   aadb979b81e36       kube-apiserver-ha-763049
	7396dba065981       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   8fbeb2be90609       etcd-ha-763049
	e87f4671f1e6a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   2f8e10cced82e       kube-controller-manager-ha-763049
	3a72aa191af24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   8abf83da8b028       kube-scheduler-ha-763049
	b1cbf3ef31451       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   317257a7e3939       busybox-fc5497c4f-6s8vm
	5d7c5ba61589d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   83fa37df3ce80       coredns-7db6d8ff4d-xxwnd
	d2f12f3773838       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   1b78edf6f66dc       coredns-7db6d8ff4d-l4n5p
	d9b83381cff6c       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    14 minutes ago       Exited              kindnet-cni               0                   ba95977795c59       kindnet-fdmh5
	db640a7c00be2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   374e9c4294dfb       kube-proxy-mhbk7
	46540b0fd864e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   d0a2c28776819       etcd-ha-763049
	c31bbb31aa5f3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   d4383fe572e51       kube-scheduler-ha-763049
	
	
	==> coredns [2916a63d5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348] <==
	Trace[927548521]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50994->10.96.0.1:443: read: connection reset by peer 10521ms (10:51:54.413)
	Trace[927548521]: [10.521571048s] [10.521571048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50994->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5] <==
	[INFO] 10.244.1.2:43934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123259s
	[INFO] 10.244.1.2:52875 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143409s
	[INFO] 10.244.1.2:46242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001561535s
	[INFO] 10.244.1.2:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101745s
	[INFO] 10.244.1.2:44298 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140588s
	[INFO] 10.244.1.2:41448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158036s
	[INFO] 10.244.0.4:38730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084044s
	[INFO] 10.244.0.4:57968 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085926s
	[INFO] 10.244.0.4:42578 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062705s
	[INFO] 10.244.2.2:38441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139508s
	[INFO] 10.244.2.2:50163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168308s
	[INFO] 10.244.1.2:42467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125757s
	[INFO] 10.244.1.2:39047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140115s
	[INFO] 10.244.1.2:37057 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091358s
	[INFO] 10.244.0.4:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128601s
	[INFO] 10.244.0.4:32850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078977s
	[INFO] 10.244.2.2:46995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149775s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126839s
	[INFO] 10.244.2.2:54400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169256s
	[INFO] 10.244.1.2:44674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109219s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825] <==
	Trace[1032700112]: [10.357525637s] [10.357525637s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58234->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28] <==
	[INFO] 10.244.0.4:60709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157374s
	[INFO] 10.244.0.4:54900 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012654s
	[INFO] 10.244.0.4:45290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152164s
	[INFO] 10.244.2.2:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184737s
	[INFO] 10.244.2.2:53059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002292108s
	[INFO] 10.244.2.2:42700 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122981s
	[INFO] 10.244.2.2:44006 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001526846s
	[INFO] 10.244.2.2:41802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169435s
	[INFO] 10.244.1.2:49560 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026135s
	[INFO] 10.244.1.2:49037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111642s
	[INFO] 10.244.0.4:56631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201091s
	[INFO] 10.244.2.2:47071 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000231291s
	[INFO] 10.244.2.2:53040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132462s
	[INFO] 10.244.1.2:50475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008294s
	[INFO] 10.244.0.4:60819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157328s
	[INFO] 10.244.0.4:41267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078502s
	[INFO] 10.244.2.2:59469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127405s
	[INFO] 10.244.1.2:46106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125503s
	[INFO] 10.244.1.2:58330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150547s
	[INFO] 10.244.1.2:40880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136941s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-763049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:39:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-763049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03aa097434f1466280c9076799e841fb
	  System UUID:                03aa0974-34f1-4662-80c9-076799e841fb
	  Boot ID:                    efb539a5-e8b0-4a05-a8f7-bc957e281bdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s8vm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-l4n5p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-xxwnd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-763049                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-fdmh5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-763049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-763049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mhbk7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-763049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-763049                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 14m    kube-proxy       
	  Normal   Starting                 105s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-763049 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-763049 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-763049 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-763049 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Warning  ContainerGCFailed        3m28s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           99s    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           94s    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           32s    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	
	
	Name:               ha-763049-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:41:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-763049-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa1e337eb2824257a3354f0f8d3704f1
	  System UUID:                fa1e337e-b282-4257-a335-4f0f8d3704f1
	  Boot ID:                    61e7a7a7-febc-4407-904c-0af73a5ab9b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8wqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-763049-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-596ll                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-763049-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-763049-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tf7wt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-763049-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-763049-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 84s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  NodeNotReady             9m16s                  node-controller  Node ha-763049-m02 status is now: NodeNotReady
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	
	
	Name:               ha-763049-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_42_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:42:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:54:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:53:29 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:53:29 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:53:29 +0000   Mon, 29 Jul 2024 10:42:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:53:29 +0000   Mon, 29 Jul 2024 10:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-763049-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c75d5420424454e9176a9ed33c59890
	  System UUID:                3c75d542-0424-454e-9176-a9ed33c59890
	  Boot ID:                    3f271e97-7330-4495-8988-ccab2b8d4848
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bsjch                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-763049-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-567mx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-763049-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-763049-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xhcs8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-763049-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-763049-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 44s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-763049-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-763049-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node ha-763049-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node ha-763049-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node ha-763049-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 63s                kubelet          Node ha-763049-m03 has been rebooted, boot id: 3f271e97-7330-4495-8988-ccab2b8d4848
	  Normal   RegisteredNode           32s                node-controller  Node ha-763049-m03 event: Registered Node ha-763049-m03 in Controller
	
	
	Name:               ha-763049-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_43_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:53:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:53:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-763049-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 656290ccdc044847a0820e68660df2c3
	  System UUID:                656290cc-dc04-4847-a082-0e68660df2c3
	  Boot ID:                    36d93227-9c4c-4b8c-ae3c-8178d24bafd5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fq6mz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-9d6sv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-763049-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-763049-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-763049-m04 has been rebooted, boot id: 36d93227-9c4c-4b8c-ae3c-8178d24bafd5
	  Normal   NodeReady                9s                 kubelet          Node ha-763049-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054857] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.202085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132761] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281350] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.343760] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.067157] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.957567] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.681727] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.722604] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.080303] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.543544] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.094019] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 10:41] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 10:51] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	[  +0.168774] systemd-fstab-generator[3676]: Ignoring "noauto" option for root device
	[  +0.190916] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +0.150637] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +0.300397] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +9.303302] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.087720] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.749374] kauditd_printk_skb: 12 callbacks suppressed
	[ +13.402963] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.059006] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 10:52] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8] <==
	2024/07/29 10:49:44 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T10:49:44.336393Z","caller":"traceutil/trace.go:171","msg":"trace[1868624084] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"463.472429ms","start":"2024-07-29T10:49:43.872918Z","end":"2024-07-29T10:49:44.336391Z","steps":["trace[1868624084] 'agreement among raft nodes before linearized reading'  (duration: 445.744056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:49:44.343599Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:49:43.872905Z","time spent":"470.678937ms","remote":"127.0.0.1:52622","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	2024/07/29 10:49:44 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T10:49:44.36369Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4945956236695851233,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-29T10:49:44.452251Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T10:49:44.452398Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T10:49:44.452468Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"821abe7be15f44a3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T10:49:44.452839Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.452934Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.452996Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453075Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453151Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453212Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453253Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453282Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453318Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.45337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453509Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453591Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453649Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453681Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.4574Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-07-29T10:49:44.457571Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-07-29T10:49:44.457606Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-763049","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> etcd [7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177] <==
	{"level":"warn","ts":"2024-07-29T10:52:58.262519Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7fcd25f19598e910","rtt":"0s","error":"dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:00.186314Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.123:2380/version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:00.186392Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:03.262834Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7fcd25f19598e910","rtt":"0s","error":"dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:03.263911Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7fcd25f19598e910","rtt":"0s","error":"dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:04.189101Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.123:2380/version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:04.189251Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T10:53:08.024959Z","caller":"traceutil/trace.go:171","msg":"trace[788629632] transaction","detail":"{read_only:false; response_revision:2375; number_of_response:1; }","duration":"119.062504ms","start":"2024-07-29T10:53:07.905869Z","end":"2024-07-29T10:53:08.024931Z","steps":["trace[788629632] 'process raft request'  (duration: 118.955832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:53:08.190843Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.123:2380/version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:08.190894Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:08.26356Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7fcd25f19598e910","rtt":"0s","error":"dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:08.264717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7fcd25f19598e910","rtt":"0s","error":"dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:11.7599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.261289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T10:53:11.760448Z","caller":"traceutil/trace.go:171","msg":"trace[458044155] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2387; }","duration":"146.879063ms","start":"2024-07-29T10:53:11.613534Z","end":"2024-07-29T10:53:11.76042Z","steps":["trace[458044155] 'range keys from in-memory index tree'  (duration: 145.248331ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:53:11.760306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.092968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-763049-m03\" ","response":"range_response_count:1 size:6894"}
	{"level":"info","ts":"2024-07-29T10:53:11.76146Z","caller":"traceutil/trace.go:171","msg":"trace[166991519] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-763049-m03; range_end:; response_count:1; response_revision:2387; }","duration":"134.280835ms","start":"2024-07-29T10:53:11.627165Z","end":"2024-07-29T10:53:11.761446Z","steps":["trace[166991519] 'range keys from in-memory index tree'  (duration: 131.83934ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:53:12.193143Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.123:2380/version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T10:53:12.193223Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7fcd25f19598e910","error":"Get \"https://192.168.39.123:2380/version\": dial tcp 192.168.39.123:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T10:53:12.491095Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.491161Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.495024Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.510174Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"7fcd25f19598e910","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T10:53:12.510223Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.514911Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"7fcd25f19598e910","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T10:53:12.515122Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	
	
	==> kernel <==
	 10:54:02 up 15 min,  0 users,  load average: 0.23, 0.30, 0.23
	Linux ha-763049 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645] <==
	I0729 10:53:23.486990       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:53:33.479837       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:53:33.479955       1 main.go:299] handling current node
	I0729 10:53:33.480006       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:53:33.480012       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:53:33.480171       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:53:33.480192       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:53:33.480323       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:53:33.480347       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:53:43.488448       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:53:43.488506       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:53:43.488695       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:53:43.488725       1 main.go:299] handling current node
	I0729 10:53:43.488867       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:53:43.489004       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:53:43.489108       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:53:43.489133       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:53:53.480862       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:53:53.480923       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:53:53.481165       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:53:53.481211       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:53:53.481335       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:53:53.481356       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:53:53.481423       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:53:53.481729       1 main.go:299] handling current node
	
	
	==> kindnet [d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa] <==
	I0729 10:49:15.640717       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:15.640811       1 main.go:299] handling current node
	I0729 10:49:15.640840       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:15.640847       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:15.641014       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:15.641021       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:49:15.641077       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:15.641101       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:25.641347       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:25.641510       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:49:25.641943       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:25.642051       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:25.642339       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:25.642373       1 main.go:299] handling current node
	I0729 10:49:25.642465       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:25.642490       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:35.641198       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:35.641545       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:35.641978       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:35.642063       1 main.go:299] handling current node
	I0729 10:49:35.642109       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:35.642182       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:35.642386       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:35.642416       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	E0729 10:49:42.660538       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519] <==
	I0729 10:52:15.625938       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 10:52:15.643691       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 10:52:15.645789       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 10:52:15.715669       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 10:52:15.715897       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 10:52:15.715961       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 10:52:15.717285       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 10:52:15.717721       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 10:52:15.722447       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 10:52:15.725834       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 10:52:15.731905       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 10:52:15.734691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 10:52:15.734789       1 policy_source.go:224] refreshing policies
	W0729 10:52:15.736852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39]
	I0729 10:52:15.738212       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 10:52:15.746829       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 10:52:15.746901       1 aggregator.go:165] initial CRD sync complete...
	I0729 10:52:15.746943       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 10:52:15.746966       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 10:52:15.746989       1 cache.go:39] Caches are synced for autoregister controller
	I0729 10:52:15.750358       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 10:52:15.754912       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 10:52:15.813412       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 10:52:16.635287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 10:52:17.271613       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39 192.168.39.68]
	
	
	==> kube-apiserver [dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8] <==
	I0729 10:51:32.752466       1 options.go:221] external host was not specified, using 192.168.39.68
	I0729 10:51:32.754544       1 server.go:148] Version: v1.30.3
	I0729 10:51:32.754623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:51:33.386728       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 10:51:33.392784       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 10:51:33.400014       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 10:51:33.400069       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 10:51:33.400303       1 instance.go:299] Using reconciler: lease
	W0729 10:51:53.382216       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 10:51:53.386657       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 10:51:53.406548       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 10:51:53.406567       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338] <==
	I0729 10:52:28.334100       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049-m02"
	I0729 10:52:28.334169       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049-m03"
	I0729 10:52:28.334188       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049-m04"
	I0729 10:52:28.334247       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-763049"
	I0729 10:52:28.334366       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 10:52:28.405446       1 shared_informer.go:320] Caches are synced for deployment
	I0729 10:52:28.407605       1 shared_informer.go:320] Caches are synced for disruption
	I0729 10:52:28.436598       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 10:52:28.444830       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 10:52:28.863450       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:52:28.863530       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 10:52:28.900497       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:52:34.809877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.336µs"
	I0729 10:52:36.360415       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rl6mg EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rl6mg\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 10:52:36.361710       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"ebaccad4-367e-4bfa-80e8-c9b0f9dadd46", APIVersion:"v1", ResourceVersion:"233", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rl6mg EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rl6mg": the object has been modified; please apply your changes to the latest version and try again
	I0729 10:52:36.385229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.602855ms"
	I0729 10:52:36.398511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.14945ms"
	I0729 10:52:36.398972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="106.01µs"
	I0729 10:52:40.391697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.780603ms"
	I0729 10:52:40.392119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.891µs"
	I0729 10:53:00.028015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.618712ms"
	I0729 10:53:00.028178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.763µs"
	I0729 10:53:20.545084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.898061ms"
	I0729 10:53:20.547433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.385µs"
	I0729 10:53:53.544507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	
	
	==> kube-controller-manager [e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55] <==
	I0729 10:51:33.399254       1 serving.go:380] Generated self-signed cert in-memory
	I0729 10:51:33.878113       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 10:51:33.878152       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:51:33.880204       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 10:51:33.881583       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 10:51:33.881968       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 10:51:33.882075       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 10:51:54.414530       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.68:8443/healthz\": dial tcp 192.168.39.68:8443: connect: connection refused"
	
	
	==> kube-proxy [db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8] <==
	E0729 10:48:32.481308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:35.552347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:35.553253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:35.554069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:35.554318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:38.625002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:38.625057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:41.697040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:41.697342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:41.697506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:41.697557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:47.841791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:47.841941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:57.059441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:57.059891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:57.060111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:57.060166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:00.129139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:00.129292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:18.563147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:18.563345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:24.705238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:24.705293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:24.705452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:24.705473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7] <==
	I0729 10:51:34.146641       1 server_linux.go:69] "Using iptables proxy"
	E0729 10:51:36.801729       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:39.872806       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:42.945291       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:49.089801       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:58.304285       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:52:16.739936       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 10:52:16.739993       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0729 10:52:16.955861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:52:16.955964       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:52:16.955983       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:52:16.970994       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:52:16.971254       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:52:16.971285       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:52:16.973963       1 config.go:192] "Starting service config controller"
	I0729 10:52:16.974933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:52:16.974972       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:52:16.975370       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:52:16.977817       1 config.go:319] "Starting node config controller"
	I0729 10:52:16.977909       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:52:17.075287       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:52:17.077464       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:52:17.078115       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb] <==
	W0729 10:52:11.564400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.564501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:11.799448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.68:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.799514       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.68:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:11.879653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.879721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:12.256622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.68:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:12.256674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.68:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:12.921853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:12.921905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:13.026258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.68:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:13.026323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.68:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:13.107436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:13.107503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:15.660189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:52:15.660331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:52:15.660563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:52:15.660632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:52:15.660892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 10:52:15.660972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 10:52:15.661218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 10:52:15.661306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 10:52:15.660928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:52:15.665857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 10:52:32.721702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3] <==
	W0729 10:49:39.070829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:49:39.070878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 10:49:39.631520       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:49:39.631567       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:49:40.297236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:49:40.297343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:49:40.334993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 10:49:40.335113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 10:49:40.419065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:49:40.419176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:49:40.454479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:40.454568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:40.494022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:49:40.494156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:49:40.608038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:40.608090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:40.725835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 10:49:40.725949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 10:49:41.151859       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:49:41.151902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 10:49:41.183165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:41.183217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:41.895061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:49:41.895169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:49:44.292065       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 10:52:13 ha-763049 kubelet[1375]: E0729 10:52:13.664221    1375 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-763049\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 10:52:13 ha-763049 kubelet[1375]: I0729 10:52:13.665092    1375 status_manager.go:853] "Failed to get status for pod" podUID="b05b91ac-ef64-4bd2-9824-83723bddfef7" pod="kube-system/kube-proxy-mhbk7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhbk7\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 10:52:14 ha-763049 kubelet[1375]: I0729 10:52:14.221328    1375 scope.go:117] "RemoveContainer" containerID="e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55"
	Jul 29 10:52:16 ha-763049 kubelet[1375]: I0729 10:52:16.737171    1375 status_manager.go:853] "Failed to get status for pod" podUID="4ed222fa-9517-42bb-bbde-6632f91bda05" pod="kube-system/kindnet-fdmh5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-fdmh5\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 10:52:16 ha-763049 kubelet[1375]: E0729 10:52:16.737213    1375 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-763049\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 10:52:16 ha-763049 kubelet[1375]: E0729 10:52:16.737720    1375 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 10:52:26 ha-763049 kubelet[1375]: I0729 10:52:26.222000    1375 scope.go:117] "RemoveContainer" containerID="381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b"
	Jul 29 10:52:26 ha-763049 kubelet[1375]: E0729 10:52:26.222867    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d48db391-d5bb-4974-88d7-f5c71e3edb4a)\"" pod="kube-system/storage-provisioner" podUID="d48db391-d5bb-4974-88d7-f5c71e3edb4a"
	Jul 29 10:52:34 ha-763049 kubelet[1375]: E0729 10:52:34.245109    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:52:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:52:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:52:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:52:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:52:40 ha-763049 kubelet[1375]: I0729 10:52:40.221957    1375 scope.go:117] "RemoveContainer" containerID="381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b"
	Jul 29 10:52:40 ha-763049 kubelet[1375]: E0729 10:52:40.222211    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d48db391-d5bb-4974-88d7-f5c71e3edb4a)\"" pod="kube-system/storage-provisioner" podUID="d48db391-d5bb-4974-88d7-f5c71e3edb4a"
	Jul 29 10:52:54 ha-763049 kubelet[1375]: I0729 10:52:54.234819    1375 scope.go:117] "RemoveContainer" containerID="381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b"
	Jul 29 10:53:00 ha-763049 kubelet[1375]: I0729 10:53:00.565384    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-6s8vm" podStartSLOduration=601.6788046 podStartE2EDuration="10m4.56529912s" podCreationTimestamp="2024-07-29 10:42:56 +0000 UTC" firstStartedPulling="2024-07-29 10:42:57.246528262 +0000 UTC m=+203.194715420" lastFinishedPulling="2024-07-29 10:43:00.133022777 +0000 UTC m=+206.081209940" observedRunningTime="2024-07-29 10:43:01.135394054 +0000 UTC m=+207.083581221" watchObservedRunningTime="2024-07-29 10:53:00.56529912 +0000 UTC m=+806.513486290"
	Jul 29 10:53:03 ha-763049 kubelet[1375]: I0729 10:53:03.221245    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-763049" podUID="5f88bfd4-d887-4989-bf71-7a4459aa6655"
	Jul 29 10:53:03 ha-763049 kubelet[1375]: I0729 10:53:03.239598    1375 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-763049"
	Jul 29 10:53:04 ha-763049 kubelet[1375]: I0729 10:53:04.253030    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-763049" podUID="5f88bfd4-d887-4989-bf71-7a4459aa6655"
	Jul 29 10:53:34 ha-763049 kubelet[1375]: E0729 10:53:34.239276    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:53:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 10:54:01.412772   30409 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19337-3845/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-763049 -n ha-763049
helpers_test.go:261: (dbg) Run:  kubectl --context ha-763049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (382.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 stop -v=7 --alsologtostderr
E0729 10:54:57.915947   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 stop -v=7 --alsologtostderr: exit status 82 (2m0.462430113s)

                                                
                                                
-- stdout --
	* Stopping node "ha-763049-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:54:21.543765   30819 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:54:21.544023   30819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:54:21.544033   30819 out.go:304] Setting ErrFile to fd 2...
	I0729 10:54:21.544039   30819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:54:21.544263   30819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:54:21.544499   30819 out.go:298] Setting JSON to false
	I0729 10:54:21.544600   30819 mustload.go:65] Loading cluster: ha-763049
	I0729 10:54:21.544979   30819 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:54:21.545073   30819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:54:21.545273   30819 mustload.go:65] Loading cluster: ha-763049
	I0729 10:54:21.545423   30819 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:54:21.545458   30819 stop.go:39] StopHost: ha-763049-m04
	I0729 10:54:21.545805   30819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:54:21.545873   30819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:54:21.560335   30819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0729 10:54:21.560771   30819 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:54:21.561254   30819 main.go:141] libmachine: Using API Version  1
	I0729 10:54:21.561275   30819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:54:21.561623   30819 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:54:21.563913   30819 out.go:177] * Stopping node "ha-763049-m04"  ...
	I0729 10:54:21.565553   30819 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 10:54:21.565588   30819 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:54:21.565800   30819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 10:54:21.565822   30819 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:54:21.568657   30819 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:54:21.569213   30819 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:53:47 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:54:21.569250   30819 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:54:21.569516   30819 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:54:21.569663   30819 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:54:21.569851   30819 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:54:21.569980   30819 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	I0729 10:54:21.649599   30819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 10:54:21.703722   30819 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 10:54:21.757515   30819 main.go:141] libmachine: Stopping "ha-763049-m04"...
	I0729 10:54:21.757549   30819 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:54:21.759546   30819 main.go:141] libmachine: (ha-763049-m04) Calling .Stop
	I0729 10:54:21.763257   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 0/120
	I0729 10:54:22.765401   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 1/120
	I0729 10:54:23.766828   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 2/120
	I0729 10:54:24.768295   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 3/120
	I0729 10:54:25.769705   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 4/120
	I0729 10:54:26.771685   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 5/120
	I0729 10:54:27.773257   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 6/120
	I0729 10:54:28.774543   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 7/120
	I0729 10:54:29.775923   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 8/120
	I0729 10:54:30.777391   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 9/120
	I0729 10:54:31.779640   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 10/120
	I0729 10:54:32.780934   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 11/120
	I0729 10:54:33.782271   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 12/120
	I0729 10:54:34.783958   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 13/120
	I0729 10:54:35.785346   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 14/120
	I0729 10:54:36.787524   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 15/120
	I0729 10:54:37.789106   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 16/120
	I0729 10:54:38.790819   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 17/120
	I0729 10:54:39.792693   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 18/120
	I0729 10:54:40.794260   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 19/120
	I0729 10:54:41.796646   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 20/120
	I0729 10:54:42.798178   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 21/120
	I0729 10:54:43.799682   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 22/120
	I0729 10:54:44.801427   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 23/120
	I0729 10:54:45.802680   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 24/120
	I0729 10:54:46.804319   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 25/120
	I0729 10:54:47.805772   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 26/120
	I0729 10:54:48.807192   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 27/120
	I0729 10:54:49.809644   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 28/120
	I0729 10:54:50.811709   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 29/120
	I0729 10:54:51.813670   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 30/120
	I0729 10:54:52.815574   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 31/120
	I0729 10:54:53.816865   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 32/120
	I0729 10:54:54.818360   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 33/120
	I0729 10:54:55.819746   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 34/120
	I0729 10:54:56.821654   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 35/120
	I0729 10:54:57.823531   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 36/120
	I0729 10:54:58.824920   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 37/120
	I0729 10:54:59.826706   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 38/120
	I0729 10:55:00.827974   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 39/120
	I0729 10:55:01.830239   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 40/120
	I0729 10:55:02.831617   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 41/120
	I0729 10:55:03.833319   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 42/120
	I0729 10:55:04.834809   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 43/120
	I0729 10:55:05.836401   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 44/120
	I0729 10:55:06.837710   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 45/120
	I0729 10:55:07.839301   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 46/120
	I0729 10:55:08.840815   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 47/120
	I0729 10:55:09.842327   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 48/120
	I0729 10:55:10.843924   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 49/120
	I0729 10:55:11.845555   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 50/120
	I0729 10:55:12.847339   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 51/120
	I0729 10:55:13.848761   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 52/120
	I0729 10:55:14.850274   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 53/120
	I0729 10:55:15.852213   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 54/120
	I0729 10:55:16.853984   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 55/120
	I0729 10:55:17.855630   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 56/120
	I0729 10:55:18.857150   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 57/120
	I0729 10:55:19.858598   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 58/120
	I0729 10:55:20.859743   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 59/120
	I0729 10:55:21.861888   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 60/120
	I0729 10:55:22.863431   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 61/120
	I0729 10:55:23.865229   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 62/120
	I0729 10:55:24.866807   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 63/120
	I0729 10:55:25.868271   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 64/120
	I0729 10:55:26.870229   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 65/120
	I0729 10:55:27.872538   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 66/120
	I0729 10:55:28.874105   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 67/120
	I0729 10:55:29.875633   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 68/120
	I0729 10:55:30.876930   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 69/120
	I0729 10:55:31.878765   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 70/120
	I0729 10:55:32.880240   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 71/120
	I0729 10:55:33.881557   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 72/120
	I0729 10:55:34.882884   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 73/120
	I0729 10:55:35.884441   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 74/120
	I0729 10:55:36.886366   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 75/120
	I0729 10:55:37.887727   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 76/120
	I0729 10:55:38.888975   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 77/120
	I0729 10:55:39.890633   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 78/120
	I0729 10:55:40.892143   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 79/120
	I0729 10:55:41.894204   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 80/120
	I0729 10:55:42.895919   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 81/120
	I0729 10:55:43.897262   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 82/120
	I0729 10:55:44.898718   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 83/120
	I0729 10:55:45.900140   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 84/120
	I0729 10:55:46.901723   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 85/120
	I0729 10:55:47.903866   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 86/120
	I0729 10:55:48.905578   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 87/120
	I0729 10:55:49.907264   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 88/120
	I0729 10:55:50.909471   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 89/120
	I0729 10:55:51.911497   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 90/120
	I0729 10:55:52.913044   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 91/120
	I0729 10:55:53.915172   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 92/120
	I0729 10:55:54.916872   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 93/120
	I0729 10:55:55.918180   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 94/120
	I0729 10:55:56.920100   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 95/120
	I0729 10:55:57.921515   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 96/120
	I0729 10:55:58.922618   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 97/120
	I0729 10:55:59.924060   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 98/120
	I0729 10:56:00.925249   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 99/120
	I0729 10:56:01.927282   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 100/120
	I0729 10:56:02.928909   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 101/120
	I0729 10:56:03.930549   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 102/120
	I0729 10:56:04.931875   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 103/120
	I0729 10:56:05.933445   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 104/120
	I0729 10:56:06.935251   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 105/120
	I0729 10:56:07.937361   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 106/120
	I0729 10:56:08.938726   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 107/120
	I0729 10:56:09.940708   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 108/120
	I0729 10:56:10.941920   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 109/120
	I0729 10:56:11.943531   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 110/120
	I0729 10:56:12.944955   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 111/120
	I0729 10:56:13.946310   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 112/120
	I0729 10:56:14.947618   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 113/120
	I0729 10:56:15.949123   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 114/120
	I0729 10:56:16.951144   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 115/120
	I0729 10:56:17.953345   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 116/120
	I0729 10:56:18.954788   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 117/120
	I0729 10:56:19.956428   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 118/120
	I0729 10:56:20.957757   30819 main.go:141] libmachine: (ha-763049-m04) Waiting for machine to stop 119/120
	I0729 10:56:21.958716   30819 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 10:56:21.958791   30819 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 10:56:21.960677   30819 out.go:177] 
	W0729 10:56:21.961973   30819 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 10:56:21.961984   30819 out.go:239] * 
	* 
	W0729 10:56:21.964213   30819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:56:21.965541   30819 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-763049 stop -v=7 --alsologtostderr": exit status 82
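Note on the failure mode above: the stderr shows minikube's bounded wait, which polls the ha-763049-m04 domain roughly once per second for 120 attempts and then gives up with GUEST_STOP_TIMEOUT (exit status 82) while the guest still reports "Running". The following is a minimal Go sketch of that polling pattern only; the state/helper names are illustrative placeholders, not minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for the driver's machine state; only the value
// that matters for this sketch is modeled.
type vmState string

const stateRunning vmState = "Running"

// waitForStop polls getState once per second, up to maxRetries times, and
// returns an error if the machine never leaves the Running state. This
// mirrors the "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(getState func() vmState, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if getState() != stateRunning {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A guest that never leaves Running reproduces the timeout seen above.
	err := waitForStop(func() vmState { return stateRunning }, 120)
	fmt.Println("stop err:", err)
}
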
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr: exit status 3 (19.043588782s)

                                                
                                                
-- stdout --
	ha-763049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-763049-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:56:22.010112   31236 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:56:22.010207   31236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:56:22.010216   31236 out.go:304] Setting ErrFile to fd 2...
	I0729 10:56:22.010220   31236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:56:22.010370   31236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:56:22.010518   31236 out.go:298] Setting JSON to false
	I0729 10:56:22.010542   31236 mustload.go:65] Loading cluster: ha-763049
	I0729 10:56:22.010588   31236 notify.go:220] Checking for updates...
	I0729 10:56:22.010918   31236 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:56:22.010934   31236 status.go:255] checking status of ha-763049 ...
	I0729 10:56:22.011290   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.011350   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.029711   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0729 10:56:22.030102   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.030720   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.030756   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.031078   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.031286   31236 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:56:22.032598   31236 status.go:330] ha-763049 host status = "Running" (err=<nil>)
	I0729 10:56:22.032615   31236 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:56:22.032979   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.033025   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.047472   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
	I0729 10:56:22.047837   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.048272   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.048307   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.048600   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.048769   31236 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:56:22.051539   31236 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:56:22.051958   31236 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:56:22.051990   31236 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:56:22.052084   31236 host.go:66] Checking if "ha-763049" exists ...
	I0729 10:56:22.052454   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.052515   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.067596   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38293
	I0729 10:56:22.068010   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.068432   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.068449   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.068712   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.068886   31236 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:56:22.069099   31236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:56:22.069131   31236 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:56:22.071696   31236 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:56:22.072215   31236 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:56:22.072234   31236 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:56:22.072402   31236 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:56:22.072590   31236 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:56:22.072759   31236 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:56:22.072876   31236 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:56:22.156300   31236 ssh_runner.go:195] Run: systemctl --version
	I0729 10:56:22.163862   31236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:56:22.181195   31236 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:56:22.181228   31236 api_server.go:166] Checking apiserver status ...
	I0729 10:56:22.181281   31236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:56:22.201315   31236 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5129/cgroup
	W0729 10:56:22.218062   31236 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5129/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:56:22.218156   31236 ssh_runner.go:195] Run: ls
	I0729 10:56:22.223195   31236 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:56:22.227556   31236 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:56:22.227581   31236 status.go:422] ha-763049 apiserver status = Running (err=<nil>)
	I0729 10:56:22.227592   31236 status.go:257] ha-763049 status: &{Name:ha-763049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:56:22.227609   31236 status.go:255] checking status of ha-763049-m02 ...
	I0729 10:56:22.228025   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.228072   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.243212   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0729 10:56:22.243628   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.244065   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.244086   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.244475   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.244705   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetState
	I0729 10:56:22.246724   31236 status.go:330] ha-763049-m02 host status = "Running" (err=<nil>)
	I0729 10:56:22.246751   31236 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:56:22.247182   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.247231   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.264882   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I0729 10:56:22.265305   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.265822   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.265853   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.266185   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.266376   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetIP
	I0729 10:56:22.269010   31236 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:56:22.269426   31236 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:51:37 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:56:22.269449   31236 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:56:22.269598   31236 host.go:66] Checking if "ha-763049-m02" exists ...
	I0729 10:56:22.269932   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.269964   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.285188   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0729 10:56:22.285637   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.286138   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.286157   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.286438   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.286615   31236 main.go:141] libmachine: (ha-763049-m02) Calling .DriverName
	I0729 10:56:22.286805   31236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:56:22.286829   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHHostname
	I0729 10:56:22.289604   31236 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:56:22.290001   31236 main.go:141] libmachine: (ha-763049-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:91:e5", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:51:37 +0000 UTC Type:0 Mac:52:54:00:d3:91:e5 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-763049-m02 Clientid:01:52:54:00:d3:91:e5}
	I0729 10:56:22.290040   31236 main.go:141] libmachine: (ha-763049-m02) DBG | domain ha-763049-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:d3:91:e5 in network mk-ha-763049
	I0729 10:56:22.290179   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHPort
	I0729 10:56:22.290338   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHKeyPath
	I0729 10:56:22.290492   31236 main.go:141] libmachine: (ha-763049-m02) Calling .GetSSHUsername
	I0729 10:56:22.290620   31236 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m02/id_rsa Username:docker}
	I0729 10:56:22.371929   31236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:56:22.390649   31236 kubeconfig.go:125] found "ha-763049" server: "https://192.168.39.254:8443"
	I0729 10:56:22.390675   31236 api_server.go:166] Checking apiserver status ...
	I0729 10:56:22.390725   31236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:56:22.407863   31236 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0729 10:56:22.420941   31236 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:56:22.421004   31236 ssh_runner.go:195] Run: ls
	I0729 10:56:22.425619   31236 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 10:56:22.430217   31236 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 10:56:22.430244   31236 status.go:422] ha-763049-m02 apiserver status = Running (err=<nil>)
	I0729 10:56:22.430252   31236 status.go:257] ha-763049-m02 status: &{Name:ha-763049-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:56:22.430264   31236 status.go:255] checking status of ha-763049-m04 ...
	I0729 10:56:22.430532   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.430563   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.445699   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46851
	I0729 10:56:22.446071   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.446523   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.446542   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.446855   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.447059   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetState
	I0729 10:56:22.448423   31236 status.go:330] ha-763049-m04 host status = "Running" (err=<nil>)
	I0729 10:56:22.448441   31236 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:56:22.448709   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.448739   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.463678   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0729 10:56:22.464179   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.464674   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.464695   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.465015   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.465180   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetIP
	I0729 10:56:22.468179   31236 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:56:22.468535   31236 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:53:47 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:56:22.468563   31236 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:56:22.468653   31236 host.go:66] Checking if "ha-763049-m04" exists ...
	I0729 10:56:22.468934   31236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:56:22.468973   31236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:56:22.483214   31236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0729 10:56:22.483550   31236 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:56:22.484013   31236 main.go:141] libmachine: Using API Version  1
	I0729 10:56:22.484033   31236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:56:22.484321   31236 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:56:22.484492   31236 main.go:141] libmachine: (ha-763049-m04) Calling .DriverName
	I0729 10:56:22.484666   31236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:56:22.484686   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHHostname
	I0729 10:56:22.487479   31236 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:56:22.487945   31236 main.go:141] libmachine: (ha-763049-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:89:2a", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:53:47 +0000 UTC Type:0 Mac:52:54:00:c9:89:2a Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-763049-m04 Clientid:01:52:54:00:c9:89:2a}
	I0729 10:56:22.487967   31236 main.go:141] libmachine: (ha-763049-m04) DBG | domain ha-763049-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:c9:89:2a in network mk-ha-763049
	I0729 10:56:22.488104   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHPort
	I0729 10:56:22.488277   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHKeyPath
	I0729 10:56:22.488420   31236 main.go:141] libmachine: (ha-763049-m04) Calling .GetSSHUsername
	I0729 10:56:22.488591   31236 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049-m04/id_rsa Username:docker}
	W0729 10:56:41.010903   31236 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0729 10:56:41.011019   31236 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0729 10:56:41.011043   31236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0729 10:56:41.011054   31236 status.go:257] ha-763049-m04 status: &{Name:ha-763049-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:56:41.011077   31236 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr" : exit status 3
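The status failure itself is driven by SSH to the worker being unreachable (dial tcp 192.168.39.102:22: connect: no route to host), which is why minikube reports Host:Error / Kubelet:Nonexistent for ha-763049-m04. A hedged way to reproduce just that symptom from the CI host is a plain TCP dial with a timeout; the address below is taken from the log, and nothing minikube-specific is used.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.102:22 is the ha-763049-m04 SSH endpoint from the log above.
	addr := "192.168.39.102:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// With the VM half-stopped this prints something like:
		// unreachable: dial tcp 192.168.39.102:22: connect: no route to host
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable:", addr)
}
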
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-763049 -n ha-763049
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-763049 logs -n 25: (1.85629209s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m04 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp testdata/cp-test.txt                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049:/home/docker/cp-test_ha-763049-m04_ha-763049.txt                      |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049 sudo cat                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049.txt                                |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m02:/home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m02 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m03:/home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n                                                                | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | ha-763049-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-763049 ssh -n ha-763049-m03 sudo cat                                         | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC | 29 Jul 24 10:44 UTC |
	|         | /home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-763049 node stop m02 -v=7                                                    | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-763049 node start m02 -v=7                                                   | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-763049 -v=7                                                          | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-763049 -v=7                                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-763049 --wait=true -v=7                                                   | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:49 UTC | 29 Jul 24 10:54 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-763049                                                               | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:54 UTC |                     |
	| node    | ha-763049 node delete m03 -v=7                                                  | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:54 UTC | 29 Jul 24 10:54 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-763049 stop -v=7                                                             | ha-763049 | jenkins | v1.33.1 | 29 Jul 24 10:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:49:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:49:43.408985   29021 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:49:43.409270   29021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:49:43.409284   29021 out.go:304] Setting ErrFile to fd 2...
	I0729 10:49:43.409289   29021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:49:43.409507   29021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:49:43.410079   29021 out.go:298] Setting JSON to false
	I0729 10:49:43.411085   29021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1929,"bootTime":1722248254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:49:43.411154   29021 start.go:139] virtualization: kvm guest
	I0729 10:49:43.413383   29021 out.go:177] * [ha-763049] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:49:43.414957   29021 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:49:43.414960   29021 notify.go:220] Checking for updates...
	I0729 10:49:43.416388   29021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:49:43.418001   29021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:49:43.419182   29021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:49:43.420413   29021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:49:43.421713   29021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:49:43.423461   29021 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:49:43.423569   29021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:49:43.424019   29021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:49:43.424091   29021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:49:43.438976   29021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0729 10:49:43.439378   29021 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:49:43.440054   29021 main.go:141] libmachine: Using API Version  1
	I0729 10:49:43.440083   29021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:49:43.440513   29021 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:49:43.440798   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.475335   29021 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 10:49:43.476670   29021 start.go:297] selected driver: kvm2
	I0729 10:49:43.476684   29021 start.go:901] validating driver "kvm2" against &{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:49:43.476809   29021 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:49:43.477189   29021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:49:43.477304   29021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:49:43.492313   29021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:49:43.492949   29021 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:49:43.493006   29021 cni.go:84] Creating CNI manager for ""
	I0729 10:49:43.493019   29021 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:49:43.493075   29021 start.go:340] cluster config:
	{Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:49:43.493198   29021 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:49:43.495666   29021 out.go:177] * Starting "ha-763049" primary control-plane node in "ha-763049" cluster
	I0729 10:49:43.497054   29021 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:49:43.497105   29021 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:49:43.497130   29021 cache.go:56] Caching tarball of preloaded images
	I0729 10:49:43.497237   29021 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:49:43.497252   29021 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:49:43.497375   29021 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/config.json ...
	I0729 10:49:43.497619   29021 start.go:360] acquireMachinesLock for ha-763049: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:49:43.497690   29021 start.go:364] duration metric: took 42.837µs to acquireMachinesLock for "ha-763049"
	I0729 10:49:43.497712   29021 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:49:43.497721   29021 fix.go:54] fixHost starting: 
	I0729 10:49:43.497998   29021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:49:43.498038   29021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:49:43.512187   29021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0729 10:49:43.512573   29021 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:49:43.513081   29021 main.go:141] libmachine: Using API Version  1
	I0729 10:49:43.513109   29021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:49:43.513503   29021 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:49:43.513704   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.513889   29021 main.go:141] libmachine: (ha-763049) Calling .GetState
	I0729 10:49:43.515399   29021 fix.go:112] recreateIfNeeded on ha-763049: state=Running err=<nil>
	W0729 10:49:43.515434   29021 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:49:43.517293   29021 out.go:177] * Updating the running kvm2 "ha-763049" VM ...
	I0729 10:49:43.518507   29021 machine.go:94] provisionDockerMachine start ...
	I0729 10:49:43.518523   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:49:43.518733   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.521014   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.521521   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.521544   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.521701   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.521877   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.522073   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.522214   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.522394   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.522574   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.522584   29021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:49:43.631834   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:49:43.631857   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.632087   29021 buildroot.go:166] provisioning hostname "ha-763049"
	I0729 10:49:43.632112   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.632338   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.634879   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.635224   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.635251   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.635423   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.635599   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.635811   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.635970   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.636167   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.636337   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.636350   29021 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-763049 && echo "ha-763049" | sudo tee /etc/hostname
	I0729 10:49:43.763357   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-763049
	
	I0729 10:49:43.763389   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.766256   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.766622   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.766645   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.766846   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:43.767049   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.767202   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:43.767343   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:43.767509   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:43.767713   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:43.767737   29021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-763049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-763049/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-763049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:49:43.872103   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:49:43.872136   29021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 10:49:43.872177   29021 buildroot.go:174] setting up certificates
	I0729 10:49:43.872188   29021 provision.go:84] configureAuth start
	I0729 10:49:43.872200   29021 main.go:141] libmachine: (ha-763049) Calling .GetMachineName
	I0729 10:49:43.872475   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:49:43.875020   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.875413   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.875436   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.875550   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:43.877616   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.877953   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:43.877972   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:43.878188   29021 provision.go:143] copyHostCerts
	I0729 10:49:43.878219   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:49:43.878250   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 10:49:43.878259   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 10:49:43.878325   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 10:49:43.878422   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:49:43.878440   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 10:49:43.878444   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 10:49:43.878468   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 10:49:43.878521   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:49:43.878541   29021 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 10:49:43.878547   29021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 10:49:43.878569   29021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 10:49:43.878628   29021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.ha-763049 san=[127.0.0.1 192.168.39.68 ha-763049 localhost minikube]
	I0729 10:49:44.003044   29021 provision.go:177] copyRemoteCerts
	I0729 10:49:44.003109   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:49:44.003138   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:44.006115   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.006649   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:44.006680   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.006927   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:44.007122   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.007323   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:44.007508   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:49:44.089699   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 10:49:44.089778   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:49:44.117665   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 10:49:44.117740   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 10:49:44.145917   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 10:49:44.145990   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:49:44.178252   29021 provision.go:87] duration metric: took 306.049751ms to configureAuth
	I0729 10:49:44.178284   29021 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:49:44.178502   29021 config.go:182] Loaded profile config "ha-763049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:49:44.178565   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:49:44.181419   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.181847   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:49:44.181891   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:49:44.182100   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:49:44.182278   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.182426   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:49:44.182537   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:49:44.182692   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:49:44.182904   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:49:44.182923   29021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:51:14.997743   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:51:14.997772   29021 machine.go:97] duration metric: took 1m31.479254237s to provisionDockerMachine
	I0729 10:51:14.997783   29021 start.go:293] postStartSetup for "ha-763049" (driver="kvm2")
	I0729 10:51:14.997794   29021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:51:14.997809   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:14.998183   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:51:14.998219   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.001685   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.002160   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.002186   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.002325   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.002518   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.002720   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.002846   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.086913   29021 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:51:15.091315   29021 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:51:15.091338   29021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 10:51:15.091395   29021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 10:51:15.091475   29021 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 10:51:15.091486   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 10:51:15.091574   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:51:15.101166   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:51:15.129390   29021 start.go:296] duration metric: took 131.592211ms for postStartSetup
	I0729 10:51:15.129434   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.129736   29021 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 10:51:15.129761   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.132347   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.132701   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.132726   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.132859   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.133023   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.133202   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.133393   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	W0729 10:51:15.217266   29021 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 10:51:15.217293   29021 fix.go:56] duration metric: took 1m31.719573082s for fixHost
	I0729 10:51:15.217317   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.219860   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.220222   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.220239   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.220435   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.220645   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.220865   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.221001   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.221210   29021 main.go:141] libmachine: Using SSH client type: native
	I0729 10:51:15.221369   29021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 10:51:15.221379   29021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:51:15.323975   29021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722250275.289847121
	
	I0729 10:51:15.324004   29021 fix.go:216] guest clock: 1722250275.289847121
	I0729 10:51:15.324015   29021 fix.go:229] Guest: 2024-07-29 10:51:15.289847121 +0000 UTC Remote: 2024-07-29 10:51:15.21730072 +0000 UTC m=+91.842665800 (delta=72.546401ms)
	I0729 10:51:15.324042   29021 fix.go:200] guest clock delta is within tolerance: 72.546401ms
	I0729 10:51:15.324047   29021 start.go:83] releasing machines lock for "ha-763049", held for 1m31.826344747s
	I0729 10:51:15.324081   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.324369   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:51:15.326727   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.327061   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.327084   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.327229   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.327746   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.327955   29021 main.go:141] libmachine: (ha-763049) Calling .DriverName
	I0729 10:51:15.328073   29021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:51:15.328116   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.328156   29021 ssh_runner.go:195] Run: cat /version.json
	I0729 10:51:15.328175   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHHostname
	I0729 10:51:15.330765   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331029   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331195   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.331219   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331332   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.331445   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:15.331465   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:15.331522   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.331672   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHPort
	I0729 10:51:15.331690   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.331853   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHKeyPath
	I0729 10:51:15.331877   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.332014   29021 main.go:141] libmachine: (ha-763049) Calling .GetSSHUsername
	I0729 10:51:15.332166   29021 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/ha-763049/id_rsa Username:docker}
	I0729 10:51:15.432315   29021 ssh_runner.go:195] Run: systemctl --version
	I0729 10:51:15.438800   29021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:51:15.600587   29021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:51:15.612066   29021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:51:15.612139   29021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:51:15.623504   29021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 10:51:15.623531   29021 start.go:495] detecting cgroup driver to use...
	I0729 10:51:15.623591   29021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:51:15.641036   29021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:51:15.656481   29021 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:51:15.656547   29021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:51:15.671820   29021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:51:15.687239   29021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:51:15.849582   29021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:51:16.001986   29021 docker.go:233] disabling docker service ...
	I0729 10:51:16.002044   29021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:51:16.019870   29021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:51:16.035215   29021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:51:16.197283   29021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:51:16.347969   29021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:51:16.363148   29021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:51:16.383050   29021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:51:16.383116   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.395365   29021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:51:16.395446   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.408575   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.420122   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.431743   29021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:51:16.443214   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.454061   29021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.464925   29021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:51:16.476057   29021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:51:16.486249   29021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:51:16.496322   29021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:51:16.643776   29021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:51:25.432305   29021 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.788500054s)
	I0729 10:51:25.432335   29021 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:51:25.432395   29021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:51:25.437669   29021 start.go:563] Will wait 60s for crictl version
	I0729 10:51:25.437728   29021 ssh_runner.go:195] Run: which crictl
	I0729 10:51:25.441709   29021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:51:25.482903   29021 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:51:25.482984   29021 ssh_runner.go:195] Run: crio --version
	I0729 10:51:25.513327   29021 ssh_runner.go:195] Run: crio --version
	I0729 10:51:25.546814   29021 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:51:25.548257   29021 main.go:141] libmachine: (ha-763049) Calling .GetIP
	I0729 10:51:25.550986   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:25.551437   29021 main.go:141] libmachine: (ha-763049) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:89:08", ip: ""} in network mk-ha-763049: {Iface:virbr1 ExpiryTime:2024-07-29 11:39:06 +0000 UTC Type:0 Mac:52:54:00:6d:89:08 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-763049 Clientid:01:52:54:00:6d:89:08}
	I0729 10:51:25.551469   29021 main.go:141] libmachine: (ha-763049) DBG | domain ha-763049 has defined IP address 192.168.39.68 and MAC address 52:54:00:6d:89:08 in network mk-ha-763049
	I0729 10:51:25.551675   29021 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:51:25.556784   29021 kubeadm.go:883] updating cluster {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:51:25.556954   29021 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:51:25.557015   29021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:51:25.605052   29021 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:51:25.605074   29021 crio.go:433] Images already preloaded, skipping extraction
	I0729 10:51:25.605146   29021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:51:25.644156   29021 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:51:25.644176   29021 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:51:25.644188   29021 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 10:51:25.644314   29021 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-763049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:51:25.644391   29021 ssh_runner.go:195] Run: crio config
	I0729 10:51:25.698281   29021 cni.go:84] Creating CNI manager for ""
	I0729 10:51:25.698298   29021 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:51:25.698309   29021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:51:25.698332   29021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-763049 NodeName:ha-763049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:51:25.698474   29021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-763049"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:51:25.698493   29021 kube-vip.go:115] generating kube-vip config ...
	I0729 10:51:25.698542   29021 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 10:51:25.710759   29021 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 10:51:25.710873   29021 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 10:51:25.710931   29021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:51:25.720497   29021 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:51:25.720550   29021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 10:51:25.730153   29021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 10:51:25.746857   29021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:51:25.763705   29021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 10:51:25.781921   29021 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 10:51:25.800129   29021 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 10:51:25.806251   29021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:51:25.951402   29021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:51:25.967619   29021 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049 for IP: 192.168.39.68
	I0729 10:51:25.967643   29021 certs.go:194] generating shared ca certs ...
	I0729 10:51:25.967660   29021 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:25.967862   29021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 10:51:25.967916   29021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 10:51:25.967930   29021 certs.go:256] generating profile certs ...
	I0729 10:51:25.968023   29021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/client.key
	I0729 10:51:25.968056   29021 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa
	I0729 10:51:25.968079   29021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.39 192.168.39.123 192.168.39.254]
	I0729 10:51:26.180556   29021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa ...
	I0729 10:51:26.180591   29021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa: {Name:mk6a9fc39645df1b02a8b78f419d064af5259f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:26.180829   29021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa ...
	I0729 10:51:26.180848   29021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa: {Name:mk546957a28a9db418ab3f39b372c82f974b2492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:51:26.180955   29021 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt.54ec16aa -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt
	I0729 10:51:26.181187   29021 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key.54ec16aa -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key
	I0729 10:51:26.181384   29021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key
	I0729 10:51:26.181405   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 10:51:26.181424   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 10:51:26.181446   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 10:51:26.181467   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 10:51:26.181487   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 10:51:26.181504   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 10:51:26.181524   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 10:51:26.181544   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 10:51:26.181621   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 10:51:26.181689   29021 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 10:51:26.181713   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:51:26.181754   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:51:26.181799   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:51:26.181826   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 10:51:26.181870   29021 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 10:51:26.181902   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.181918   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.181933   29021 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.182558   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:51:26.208690   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:51:26.233466   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:51:26.258571   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:51:26.283391   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 10:51:26.307793   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:51:26.335993   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:51:26.361125   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/ha-763049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:51:26.387309   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 10:51:26.412177   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:51:26.436000   29021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 10:51:26.460054   29021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:51:26.476879   29021 ssh_runner.go:195] Run: openssl version
	I0729 10:51:26.482974   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 10:51:26.495340   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.499977   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.500031   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 10:51:26.505960   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:51:26.516412   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:51:26.527968   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.533331   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.533421   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:51:26.539370   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:51:26.549548   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 10:51:26.561288   29021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.566383   29021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.566455   29021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 10:51:26.572999   29021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 10:51:26.583738   29021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:51:26.589031   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:51:26.594912   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:51:26.600785   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:51:26.606555   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:51:26.613016   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:51:26.618630   29021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 10:51:26.624347   29021 kubeadm.go:392] StartCluster: {Name:ha-763049 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-763049 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:51:26.624440   29021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:51:26.624484   29021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:51:26.668986   29021 cri.go:89] found id: "4b419adf4fad5381eaa65217b85d9a35efd45639dc697b36bdebf94a0832fec8"
	I0729 10:51:26.669010   29021 cri.go:89] found id: "b062f7f05731f65d65d798709f57fe001b52fd30f1270d06e7e8719d12006bc5"
	I0729 10:51:26.669016   29021 cri.go:89] found id: "8bd4ba6b4b03e1d22280d32240e644343933942733ec2790b4c7fa429beecf53"
	I0729 10:51:26.669020   29021 cri.go:89] found id: "5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5"
	I0729 10:51:26.669025   29021 cri.go:89] found id: "d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28"
	I0729 10:51:26.669030   29021 cri.go:89] found id: "752618ed171ab65081f77e911b1930e663f822fcefa7437c0eb6deaf4d1f8b47"
	I0729 10:51:26.669032   29021 cri.go:89] found id: "d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa"
	I0729 10:51:26.669035   29021 cri.go:89] found id: "db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8"
	I0729 10:51:26.669037   29021 cri.go:89] found id: "25081f768fa7cc5212989389e997b2cdd99e74c11242c404aafe702945a0d8d9"
	I0729 10:51:26.669043   29021 cri.go:89] found id: "46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8"
	I0729 10:51:26.669046   29021 cri.go:89] found id: "c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3"
	I0729 10:51:26.669048   29021 cri.go:89] found id: "e1dddce207d23b43b156f66ea7bce78adf53714cd300b6051510e4b90a74f804"
	I0729 10:51:26.669051   29021 cri.go:89] found id: "5a0bf98403fc73df262284424a6fe9a7c8ba7429941f8982faeed12ef5c2022d"
	I0729 10:51:26.669054   29021 cri.go:89] found id: ""
	I0729 10:51:26.669096   29021 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.627919456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250601627892266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40f03243-347d-4216-9136-f4f0752c8d12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.628623153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5719784f-bed6-4dba-9b5b-66c62cd7d6c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.628679638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5719784f-bed6-4dba-9b5b-66c62cd7d6c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.629290614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5719784f-bed6-4dba-9b5b-66c62cd7d6c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.674735814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=318d4803-d45e-4f52-9c46-79e180347c96 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.674857800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=318d4803-d45e-4f52-9c46-79e180347c96 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.676068667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afc3f1d5-2c28-4eb2-958c-bc97a7563ed2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.676580291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250601676542043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afc3f1d5-2c28-4eb2-958c-bc97a7563ed2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.677294297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5043a8f5-7e50-4e9c-a90f-117d0a44b5bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.677352324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5043a8f5-7e50-4e9c-a90f-117d0a44b5bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.677799934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5043a8f5-7e50-4e9c-a90f-117d0a44b5bf name=/runtime.v1.RuntimeService/ListContainers
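For reference, each repeated cycle in this CRI-O debug log is a client polling the runtime over the CRI gRPC API: a Version probe, an ImageFsInfo query (whose response above reports the overlay-images mountpoint, UsedBytes and InodesUsed), and a ListContainers call with an empty filter, which is why CRI-O logs "No filters were applied, returning full container list" and dumps every container with its labels and annotations. The sketch below is illustrative only and not part of the test run: it issues the same three calls from Go, assuming the k8s.io/cri-api client package, google.golang.org/grpc dialing, and the default /var/run/crio/crio.sock endpoint (a minikube node may differ); unlike the requests in the log, it narrows ListContainers to running kube-system containers using labels that appear verbatim in the dumps above.

// Minimal sketch under the assumptions stated above; not the code used by this test.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()

	// Assumed default CRI-O endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// Version probe, mirroring the VersionRequest/VersionResponse pairs in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Image filesystem usage, as reported in the ImageFsInfoResponse entries above.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Printf("%s: %d bytes, %d inodes\n", u.FsId.Mountpoint, u.UsedBytes.Value, u.InodesUsed.Value)
	}

	// Filtered container listing; the label key/value used here appears verbatim
	// in the Labels maps of the containers dumped above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State:         &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
			LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s (%s) restartCount=%s\n",
			c.Metadata.Name, c.Id[:12], c.Annotations["io.kubernetes.container.restartCount"])
	}
}

On a node, crictl's version, imagefsinfo and ps subcommands issue the same RPCs, which is typically what produces bursts of Version/ImageFsInfo/ListContainers entries like the ones in this log.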
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.724618656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f5d5c43-9ec4-4cf7-ae36-dbc6538e8c86 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.724828286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f5d5c43-9ec4-4cf7-ae36-dbc6538e8c86 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.726000759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37eb0ffc-5594-4f45-b2ec-1816ec62fee3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.726447767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250601726423195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37eb0ffc-5594-4f45-b2ec-1816ec62fee3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.727029365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fb3acae-0fb4-4967-a3cd-6354170690d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.727107481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fb3acae-0fb4-4967-a3cd-6354170690d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.727495400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fb3acae-0fb4-4967-a3cd-6354170690d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.771624589Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=511db0f7-19be-4759-8b1a-5384d831eee9 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.771702667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=511db0f7-19be-4759-8b1a-5384d831eee9 name=/runtime.v1.RuntimeService/Version
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.772932943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa5dad48-cd67-4cf9-ac38-86065550e17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.773598975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722250601773573129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa5dad48-cd67-4cf9-ac38-86065550e17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.774789496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02b92110-b6a5-49fd-94c4-67929fe370f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.774854145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02b92110-b6a5-49fd-94c4-67929fe370f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 10:56:41 ha-763049 crio[3746]: time="2024-07-29 10:56:41.775266314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:431b340150506496903eb417e75c50c78ae3187fa69a468338efb2944fe1d43b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722250374257341609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722250334248363965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722250333234561159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotations:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b,PodSandboxId:efa540dfb29868936d9cc742a361a740b15cddeeedb130cafd792b1232a6dda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722250331232921136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48db391-d5bb-4974-88d7-f5c71e3edb4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9d2daa69,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873252844d658078505b8e38cf730f312ac6550ffd57dd796992a5f162ca4534,PodSandboxId:c0bdf4544262177797e9def41310b74b83a8fed37aa0dee26d44b2da3f1e3488,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722250325527581513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annotations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd55ad825ed2ddcac9a7df2d88feb40e8e07e3a31b4cb6e4977f437446654b7,PodSandboxId:356d97ec95395ab230bc31eb363573ffb7b583c349211e50763ef16c3192d26f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722250306123883816,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51297b7fbd0a51dd85a39ef1bcc68a3d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7,PodSandboxId:5f62a1e4a4b35d2ce119595870f31e5939bf73f3fda1afaec9010f2c11534729,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722250293512694427,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645,PodSandboxId:229b61b75dafc1d4d22a7754b2b28bdf994bb033f45b30ac31ca37488497144f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722250292474541319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2916a63d
5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348,PodSandboxId:44713f32a945a1e930a5899b608fb85f67501c23dc145a18c3879e8fcefefffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292353385648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825,PodSandboxId:d80cc6c195465d37ccf6d82b1db7afca0f85c84ff2bfa863726f4a70d3fb231b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722250292326267190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177,PodSandboxId:8fbeb2be906091a7e3fcd3e31e9df405beaa03e87a4d33fea6e926d19e7c3ab1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722250292134340916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb,PodSandboxId:8abf83da8b028164227844ed02251b0b5e140350b923ae5259d18cf1e80cbad2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722250292021082529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407f
b9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8,PodSandboxId:aadb979b81e36ea6378f111a6e2c0a9347add075d1eda3b9bc8a6af7e30f3738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722250292162735878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6112007873a5488ffeba87ad2297372e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 80a12a73,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55,PodSandboxId:2f8e10cced82eab99ccb5cdc286c8ddd7e7a2072a5865e6dc9301c33378024ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722250292048328822,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400e7dce88577de760f73261cae49d02,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbf3ef31451e134f652f219e69cbb200ceabec972875bfe5bbb378b318fdaf,PodSandboxId:317257a7e3939525882f817028b8c6152d40b5adf44b8576fbf407f2ea511b9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722249780144234457,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6s8vm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58d0d09-9e1d-4e80-917d-92b1264a6609,},Annot
ations:map[string]string{io.kubernetes.container.hash: b1ec9d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5,PodSandboxId:83fa37df3ce809d4be7d11867b59cecb4c2efdcd8ce8f2e784999c60d1bb8e9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606596651642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xxwnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76efda45-4871-46fb-8a27-2e94f75de9f4,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3a4e3658,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28,PodSandboxId:1b78edf6f66dc30aa5f4f9d26ae11fb9e11ae3e4ea9bff4a2c256ac21b3683aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722249606590934622,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l4n5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f32893-3406-4eed-990f-f490efab94d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8e7d1ac8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa,PodSandboxId:ba95977795c59d9fd3c6b176b2ade309671f7d3564b899edcbc7cf2429d417f9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722249594537412852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fdmh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed222fa-9517-42bb-bbde-6632f91bda05,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7a2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8,PodSandboxId:374e9c4294dfbb9b0af51bbc8d288b3ce5f95c6e06917b8e0510a38e810c8490,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722249590199147101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mhbk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b05b91ac-ef64-4bd2-9824-83723bddfef7,},Annotations:map[string]string{io.kubernetes.container.hash: c8457896,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8,PodSandboxId:d0a2c28776819f37c825d0ab87363378bdd7bc99e55246036cd5a4f889430cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722249568437102383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81a67c9133ee28f571d62ecf0564ce,},Annotations:map[string]string{io.kubernetes.container.hash: fd69a6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3,PodSandboxId:d4383fe572e51363d8158185d3cd251d92b4ce4f6675959bde82447efdd8d9b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722249568433332037,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-763049,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4c95936a93178bab407fb9d8697650f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02b92110-b6a5-49fd-94c4-67929fe370f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	431b340150506       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   efa540dfb2986       storage-provisioner
	09674fa79f4c5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   2f8e10cced82e       kube-controller-manager-ha-763049
	5b20547718a25       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   aadb979b81e36       kube-apiserver-ha-763049
	381606df0bdae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   efa540dfb2986       storage-provisioner
	873252844d658       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   c0bdf45442621       busybox-fc5497c4f-6s8vm
	6cd55ad825ed2       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   356d97ec95395       kube-vip-ha-763049
	e449897d6adc7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   5f62a1e4a4b35       kube-proxy-mhbk7
	8a03b13750728       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   229b61b75dafc       kindnet-fdmh5
	2916a63d5ed37       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   44713f32a945a       coredns-7db6d8ff4d-xxwnd
	838e0d2cb45ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   d80cc6c195465       coredns-7db6d8ff4d-l4n5p
	dbc91bcec9ad6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   aadb979b81e36       kube-apiserver-ha-763049
	7396dba065981       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   8fbeb2be90609       etcd-ha-763049
	e87f4671f1e6a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   2f8e10cced82e       kube-controller-manager-ha-763049
	3a72aa191af24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   8abf83da8b028       kube-scheduler-ha-763049
	b1cbf3ef31451       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   317257a7e3939       busybox-fc5497c4f-6s8vm
	5d7c5ba61589d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   83fa37df3ce80       coredns-7db6d8ff4d-xxwnd
	d2f12f3773838       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   1b78edf6f66dc       coredns-7db6d8ff4d-l4n5p
	d9b83381cff6c       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   ba95977795c59       kindnet-fdmh5
	db640a7c00be2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   374e9c4294dfb       kube-proxy-mhbk7
	46540b0fd864e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   d0a2c28776819       etcd-ha-763049
	c31bbb31aa5f3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   d4383fe572e51       kube-scheduler-ha-763049
	
	
	==> coredns [2916a63d5ed377ceeacdd29b537026dd25dbd2c1ec74e8ba3893a3890e664348] <==
	Trace[927548521]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50994->10.96.0.1:443: read: connection reset by peer 10521ms (10:51:54.413)
	Trace[927548521]: [10.521571048s] [10.521571048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50994->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5d7c5ba61589d7988b28e01ef6345c2ecb0df9dbfc80eb39bf544c077036c6e5] <==
	[INFO] 10.244.1.2:43934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123259s
	[INFO] 10.244.1.2:52875 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143409s
	[INFO] 10.244.1.2:46242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001561535s
	[INFO] 10.244.1.2:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101745s
	[INFO] 10.244.1.2:44298 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140588s
	[INFO] 10.244.1.2:41448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158036s
	[INFO] 10.244.0.4:38730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084044s
	[INFO] 10.244.0.4:57968 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085926s
	[INFO] 10.244.0.4:42578 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062705s
	[INFO] 10.244.2.2:38441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139508s
	[INFO] 10.244.2.2:50163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168308s
	[INFO] 10.244.1.2:42467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125757s
	[INFO] 10.244.1.2:39047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140115s
	[INFO] 10.244.1.2:37057 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091358s
	[INFO] 10.244.0.4:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128601s
	[INFO] 10.244.0.4:32850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078977s
	[INFO] 10.244.2.2:46995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149775s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126839s
	[INFO] 10.244.2.2:54400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169256s
	[INFO] 10.244.1.2:44674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109219s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [838e0d2cb45ba65482ead03da1dbbe66eb325461f487e965da0988cb30e34825] <==
	Trace[1032700112]: [10.357525637s] [10.357525637s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58234->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d2f12f377383855d60513a2953a42fb97c034738f595362287ebe39cc4c9df28] <==
	[INFO] 10.244.0.4:60709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157374s
	[INFO] 10.244.0.4:54900 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012654s
	[INFO] 10.244.0.4:45290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152164s
	[INFO] 10.244.2.2:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184737s
	[INFO] 10.244.2.2:53059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002292108s
	[INFO] 10.244.2.2:42700 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122981s
	[INFO] 10.244.2.2:44006 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001526846s
	[INFO] 10.244.2.2:41802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169435s
	[INFO] 10.244.1.2:49560 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026135s
	[INFO] 10.244.1.2:49037 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111642s
	[INFO] 10.244.0.4:56631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201091s
	[INFO] 10.244.2.2:47071 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000231291s
	[INFO] 10.244.2.2:53040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132462s
	[INFO] 10.244.1.2:50475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008294s
	[INFO] 10.244.0.4:60819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157328s
	[INFO] 10.244.0.4:41267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078502s
	[INFO] 10.244.2.2:59469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127405s
	[INFO] 10.244.1.2:46106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125503s
	[INFO] 10.244.1.2:58330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150547s
	[INFO] 10.244.1.2:40880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136941s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-763049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:39:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:56:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:52:26 +0000   Mon, 29 Jul 2024 10:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-763049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03aa097434f1466280c9076799e841fb
	  System UUID:                03aa0974-34f1-4662-80c9-076799e841fb
	  Boot ID:                    efb539a5-e8b0-4a05-a8f7-bc957e281bdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s8vm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-l4n5p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-xxwnd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-763049                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-fdmh5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-763049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-763049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-mhbk7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-763049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-763049                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 4m25s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m    kubelet          Node ha-763049 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m    kubelet          Node ha-763049 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m    kubelet          Node ha-763049 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-763049 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Warning  ContainerGCFailed        6m8s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m19s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           4m14s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	  Normal   RegisteredNode           3m12s  node-controller  Node ha-763049 event: Registered Node ha-763049 in Controller
	
	
	Name:               ha-763049-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_41_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:41:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:56:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:52:57 +0000   Mon, 29 Jul 2024 10:52:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-763049-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa1e337eb2824257a3354f0f8d3704f1
	  System UUID:                fa1e337e-b282-4257-a335-4f0f8d3704f1
	  Boot ID:                    61e7a7a7-febc-4407-904c-0af73a5ab9b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8wqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-763049-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-596ll                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-763049-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-763049-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tf7wt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-763049-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-763049-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-763049-m02 status is now: NodeNotReady
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-763049-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-763049-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-763049-m02 event: Registered Node ha-763049-m02 in Controller
	
	
	Name:               ha-763049-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-763049-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=ha-763049
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T10_43_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:43:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-763049-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:54:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 10:53:53 +0000   Mon, 29 Jul 2024 10:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-763049-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 656290ccdc044847a0820e68660df2c3
	  System UUID:                656290cc-dc04-4847-a082-0e68660df2c3
	  Boot ID:                    36d93227-9c4c-4b8c-ae3c-8178d24bafd5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2l4sv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-fq6mz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9d6sv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-763049-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-763049-m04 event: Registered Node ha-763049-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-763049-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-763049-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-763049-m04 has been rebooted, boot id: 36d93227-9c4c-4b8c-ae3c-8178d24bafd5
	  Normal   NodeReady                2m49s                  kubelet          Node ha-763049-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s (x2 over 3m39s)   node-controller  Node ha-763049-m04 status is now: NodeNotReady
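
The three node descriptions above come from the post-mortem dump for this test: ha-763049 and ha-763049-m02 report Ready, while ha-763049-m04 carries node.kubernetes.io/unreachable taints because its kubelet stopped posting node status (lease last renewed at 10:54:13). When reproducing this locally, the same view can be regenerated with kubectl against the profile's context (a sketch; the context name ha-763049 is assumed to match the minikube profile name):

    kubectl --context ha-763049 get nodes -o wide
    kubectl --context ha-763049 describe node ha-763049-m04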
	
	
	==> dmesg <==
	[  +0.055472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054857] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.202085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.132761] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281350] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.343760] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.067157] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.957567] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +1.681727] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.722604] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.080303] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.543544] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.094019] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 10:41] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 10:51] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	[  +0.168774] systemd-fstab-generator[3676]: Ignoring "noauto" option for root device
	[  +0.190916] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +0.150637] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +0.300397] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +9.303302] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.087720] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.749374] kauditd_printk_skb: 12 callbacks suppressed
	[ +13.402963] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.059006] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 10:52] kauditd_printk_skb: 6 callbacks suppressed
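
The dmesg excerpt above and the per-container logs that follow were captured from the primary node (ha-763049). To re-collect an equivalent dump against a live profile, minikube's log bundling plus an ssh one-liner is usually enough (illustrative commands; the profile name and binary path are taken from this run):

    out/minikube-linux-amd64 -p ha-763049 logs --file=ha-763049-logs.txt
    out/minikube-linux-amd64 -p ha-763049 ssh "dmesg | tail -n 50"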
	
	
	==> etcd [46540b0fd864efc080b6d38f84267ab87be70742864844e1cde6ff79b4621ee8] <==
	2024/07/29 10:49:44 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T10:49:44.336393Z","caller":"traceutil/trace.go:171","msg":"trace[1868624084] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"463.472429ms","start":"2024-07-29T10:49:43.872918Z","end":"2024-07-29T10:49:44.336391Z","steps":["trace[1868624084] 'agreement among raft nodes before linearized reading'  (duration: 445.744056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T10:49:44.343599Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T10:49:43.872905Z","time spent":"470.678937ms","remote":"127.0.0.1:52622","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	2024/07/29 10:49:44 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T10:49:44.36369Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4945956236695851233,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-29T10:49:44.452251Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T10:49:44.452398Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T10:49:44.452468Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"821abe7be15f44a3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T10:49:44.452839Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.452934Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.452996Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453075Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453151Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453212Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453253Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4b3c3506041f68d7"}
	{"level":"info","ts":"2024-07-29T10:49:44.453282Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453318Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.45337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453509Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453591Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453649Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.453681Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:49:44.4574Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-07-29T10:49:44.457571Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-07-29T10:49:44.457606Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-763049","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> etcd [7396dba0659812fa98e24a3bd4cf548afa87c5e6c85d3be56f89719d5fad6177] <==
	{"level":"info","ts":"2024-07-29T10:53:12.495024Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.510174Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"7fcd25f19598e910","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T10:53:12.510223Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:53:12.514911Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"7fcd25f19598e910","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T10:53:12.515122Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.752697Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.123:45120","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-29T10:54:07.779606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 switched to configuration voters=(5421266351402477783 9375015013596480675)"}
	{"level":"info","ts":"2024-07-29T10:54:07.781885Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","removed-remote-peer-id":"7fcd25f19598e910","removed-remote-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2024-07-29T10:54:07.782005Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.782076Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"821abe7be15f44a3","removed-member-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.782196Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-29T10:54:07.783032Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:54:07.783124Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.783581Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:54:07.783856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:54:07.784047Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.784499Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T10:54:07.784805Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"7fcd25f19598e910","error":"failed to read 7fcd25f19598e910 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T10:54:07.784991Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.785418Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T10:54:07.785505Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:54:07.785696Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7fcd25f19598e910"}
	{"level":"info","ts":"2024-07-29T10:54:07.78578Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"821abe7be15f44a3","removed-remote-peer-id":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.795122Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"821abe7be15f44a3","remote-peer-id-stream-handler":"821abe7be15f44a3","remote-peer-id-from":"7fcd25f19598e910"}
	{"level":"warn","ts":"2024-07-29T10:54:07.810254Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"821abe7be15f44a3","remote-peer-id-stream-handler":"821abe7be15f44a3","remote-peer-id-from":"7fcd25f19598e910"}
	
	
	==> kernel <==
	 10:56:42 up 17 min,  0 users,  load average: 0.19, 0.24, 0.22
	Linux ha-763049 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8a03b1375072888e734a6644c6bad91c3d71fc0f679dbf3ffa29a8e69acf7645] <==
	I0729 10:55:53.488931       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:56:03.485211       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:56:03.485314       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:56:03.485536       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:56:03.485568       1 main.go:299] handling current node
	I0729 10:56:03.485590       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:56:03.485606       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:56:13.485644       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:56:13.485819       1 main.go:299] handling current node
	I0729 10:56:13.485871       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:56:13.485897       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:56:13.486045       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:56:13.486067       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:56:23.484935       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:56:23.485071       1 main.go:299] handling current node
	I0729 10:56:23.485103       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:56:23.485124       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:56:23.485290       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:56:23.485311       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:56:33.479059       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:56:33.479269       1 main.go:299] handling current node
	I0729 10:56:33.479318       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:56:33.479337       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:56:33.479517       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:56:33.479539       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d9b83381cff6c6e49447b548eb9b5c91a1958a8f763040151d7a0707673c56aa] <==
	I0729 10:49:15.640717       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:15.640811       1 main.go:299] handling current node
	I0729 10:49:15.640840       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:15.640847       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:15.641014       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:15.641021       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:49:15.641077       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:15.641101       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:25.641347       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:25.641510       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	I0729 10:49:25.641943       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:25.642051       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:25.642339       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:25.642373       1 main.go:299] handling current node
	I0729 10:49:25.642465       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:25.642490       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:35.641198       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 10:49:35.641545       1 main.go:322] Node ha-763049-m04 has CIDR [10.244.3.0/24] 
	I0729 10:49:35.641978       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 10:49:35.642063       1 main.go:299] handling current node
	I0729 10:49:35.642109       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0729 10:49:35.642182       1 main.go:322] Node ha-763049-m02 has CIDR [10.244.1.0/24] 
	I0729 10:49:35.642386       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0729 10:49:35.642416       1 main.go:322] Node ha-763049-m03 has CIDR [10.244.2.0/24] 
	E0729 10:49:42.660538       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [5b20547718a252aad997ed2c0df2cae435f48691718e9f4446ab41c7fefc3519] <==
	I0729 10:52:15.643691       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 10:52:15.645789       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 10:52:15.715669       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 10:52:15.715897       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 10:52:15.715961       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 10:52:15.717285       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 10:52:15.717721       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 10:52:15.722447       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 10:52:15.725834       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 10:52:15.731905       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 10:52:15.734691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 10:52:15.734789       1 policy_source.go:224] refreshing policies
	W0729 10:52:15.736852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39]
	I0729 10:52:15.738212       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 10:52:15.746829       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 10:52:15.746901       1 aggregator.go:165] initial CRD sync complete...
	I0729 10:52:15.746943       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 10:52:15.746966       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 10:52:15.746989       1 cache.go:39] Caches are synced for autoregister controller
	I0729 10:52:15.750358       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 10:52:15.754912       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 10:52:15.813412       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 10:52:16.635287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 10:52:17.271613       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39 192.168.39.68]
	W0729 10:54:17.278334       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39 192.168.39.68]
	
	
	==> kube-apiserver [dbc91bcec9ad6c6a181434711095acd4a32d53d7c61281c2221fafbfcb2d88c8] <==
	I0729 10:51:32.752466       1 options.go:221] external host was not specified, using 192.168.39.68
	I0729 10:51:32.754544       1 server.go:148] Version: v1.30.3
	I0729 10:51:32.754623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:51:33.386728       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 10:51:33.392784       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 10:51:33.400014       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 10:51:33.400069       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 10:51:33.400303       1 instance.go:299] Using reconciler: lease
	W0729 10:51:53.382216       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 10:51:53.386657       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 10:51:53.406548       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 10:51:53.406567       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
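
This apiserver instance never came up: every gRPC channel to etcd at 127.0.0.1:2379 failed its TLS handshake before the deadline, so storage initialization aborted with "Error creating leases". A quick check of whether etcd is answering on the node can look like this (illustrative only; certificate paths assume minikube's default layout):

    out/minikube-linux-amd64 -p ha-763049 ssh "sudo curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/etcd/server.crt --key /var/lib/minikube/certs/etcd/server.key https://127.0.0.1:2379/health"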
	
	
	==> kube-controller-manager [09674fa79f4c54a1bfa82578aab9e18a1d7c138f0df16c700103bc57444b8338] <==
	I0729 10:53:53.544507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	I0729 10:54:04.402002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.506401ms"
	I0729 10:54:04.475285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.209473ms"
	I0729 10:54:04.558446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.099568ms"
	I0729 10:54:04.600241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.739534ms"
	I0729 10:54:04.600352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.001µs"
	I0729 10:54:06.492284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.539µs"
	I0729 10:54:06.728110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.407µs"
	I0729 10:54:06.742869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.847µs"
	I0729 10:54:06.748487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.886µs"
	I0729 10:54:07.902964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.965207ms"
	I0729 10:54:07.903106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.217µs"
	I0729 10:54:19.469223       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-763049-m04"
	E0729 10:54:28.255132       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:28.255219       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:28.255228       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:28.255233       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:28.255238       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:48.256059       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:48.256121       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:48.256132       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:48.256137       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	E0729 10:54:48.256142       1 gc_controller.go:153] "Failed to get node" err="node \"ha-763049-m03\" not found" logger="pod-garbage-collector-controller" node="ha-763049-m03"
	I0729 10:54:54.055832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.93215ms"
	I0729 10:54:54.055992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.745µs"
	
	
	==> kube-controller-manager [e87f4671f1e6a2cf496e97926987fc7a9e8453081c2bdccdaf12ad26b6180e55] <==
	I0729 10:51:33.399254       1 serving.go:380] Generated self-signed cert in-memory
	I0729 10:51:33.878113       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 10:51:33.878152       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:51:33.880204       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 10:51:33.881583       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 10:51:33.881968       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 10:51:33.882075       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 10:51:54.414530       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.68:8443/healthz\": dial tcp 192.168.39.68:8443: connect: connection refused"
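
Here the controller-manager gave up waiting for the local apiserver at 192.168.39.68:8443, which is consistent with the apiserver instance above exiting on the etcd timeout a second earlier. Probing the health endpoints directly distinguishes "apiserver down" from "VIP unreachable" (illustrative commands; -k is used because the serving certificate is not trusted by the host):

    curl -k https://192.168.39.68:8443/healthz
    curl -k https://192.168.39.254:8443/healthz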
	
	
	==> kube-proxy [db640a7c00be2a68483aa5dd0ff7657a138468bbf4c3956536dfb842c271eff8] <==
	E0729 10:48:32.481308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:35.552347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:35.553253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:35.554069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:35.554318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:38.625002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:38.625057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:41.697040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:41.697342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:41.697506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:41.697557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:47.841791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:47.841941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:57.059441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:48:57.059891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:57.060111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:48:57.060166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:00.129139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:00.129292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:18.563147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:18.563345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-763049&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:24.705238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:24.705293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 10:49:24.705452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 10:49:24.705473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1946": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e449897d6adc73eae51d5ff58c89a89fb21ef06566ca3706ebf539c2e1db07b7] <==
	E0729 10:51:36.801729       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:39.872806       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:42.945291       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:49.089801       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:51:58.304285       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 10:52:16.739936       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-763049\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 10:52:16.739993       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0729 10:52:16.955861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:52:16.955964       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:52:16.955983       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:52:16.970994       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:52:16.971254       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:52:16.971285       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:52:16.973963       1 config.go:192] "Starting service config controller"
	I0729 10:52:16.974933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:52:16.974972       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:52:16.975370       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:52:16.977817       1 config.go:319] "Starting node config controller"
	I0729 10:52:16.977909       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:52:17.075287       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:52:17.077464       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:52:17.078115       1 shared_informer.go:320] Caches are synced for node config
	W0729 10:55:02.294335       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 10:55:02.294335       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 10:55:02.294369       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [3a72aa191af240183804ae6c9a099f1718531bb8abe5eb81b8d356a35b4d93cb] <==
	W0729 10:52:11.564400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.564501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:11.799448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.68:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.799514       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.68:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:11.879653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:11.879721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:12.256622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.68:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:12.256674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.68:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:12.921853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:12.921905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:13.026258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.68:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:13.026323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.68:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:13.107436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0729 10:52:13.107503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0729 10:52:15.660189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:52:15.660331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:52:15.660563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:52:15.660632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:52:15.660892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 10:52:15.660972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 10:52:15.661218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 10:52:15.661306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 10:52:15.660928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:52:15.665857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 10:52:32.721702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c31bbb31aa5f37a18ceea41cf2b6b983ecf0968e340db52fd631182c462dc2e3] <==
	W0729 10:49:39.070829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:49:39.070878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 10:49:39.631520       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:49:39.631567       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:49:40.297236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 10:49:40.297343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 10:49:40.334993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 10:49:40.335113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 10:49:40.419065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 10:49:40.419176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 10:49:40.454479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:40.454568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:40.494022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:49:40.494156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:49:40.608038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:40.608090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:40.725835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 10:49:40.725949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 10:49:41.151859       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:49:41.151902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 10:49:41.183165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 10:49:41.183217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:49:41.895061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:49:41.895169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:49:44.292065       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 10:52:54 ha-763049 kubelet[1375]: I0729 10:52:54.234819    1375 scope.go:117] "RemoveContainer" containerID="381606df0bdaebe51e799621c256d313ec9e81d1e77ff0f3c86fc7a36c83fd9b"
	Jul 29 10:53:00 ha-763049 kubelet[1375]: I0729 10:53:00.565384    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-6s8vm" podStartSLOduration=601.6788046 podStartE2EDuration="10m4.56529912s" podCreationTimestamp="2024-07-29 10:42:56 +0000 UTC" firstStartedPulling="2024-07-29 10:42:57.246528262 +0000 UTC m=+203.194715420" lastFinishedPulling="2024-07-29 10:43:00.133022777 +0000 UTC m=+206.081209940" observedRunningTime="2024-07-29 10:43:01.135394054 +0000 UTC m=+207.083581221" watchObservedRunningTime="2024-07-29 10:53:00.56529912 +0000 UTC m=+806.513486290"
	Jul 29 10:53:03 ha-763049 kubelet[1375]: I0729 10:53:03.221245    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-763049" podUID="5f88bfd4-d887-4989-bf71-7a4459aa6655"
	Jul 29 10:53:03 ha-763049 kubelet[1375]: I0729 10:53:03.239598    1375 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-763049"
	Jul 29 10:53:04 ha-763049 kubelet[1375]: I0729 10:53:04.253030    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-763049" podUID="5f88bfd4-d887-4989-bf71-7a4459aa6655"
	Jul 29 10:53:34 ha-763049 kubelet[1375]: E0729 10:53:34.239276    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:53:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:53:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:54:34 ha-763049 kubelet[1375]: E0729 10:54:34.241945    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:54:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:54:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:54:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:54:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:55:34 ha-763049 kubelet[1375]: E0729 10:55:34.239387    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:55:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:55:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:55:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:55:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 10:56:34 ha-763049 kubelet[1375]: E0729 10:56:34.240673    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 10:56:34 ha-763049 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 10:56:34 ha-763049 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 10:56:34 ha-763049 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 10:56:34 ha-763049 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 10:56:41.327523   31394 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19337-3845/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-763049 -n ha-763049
helpers_test.go:261: (dbg) Run:  kubectl --context ha-763049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (335.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-893477
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-893477
E0729 11:13:03.511464   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-893477: exit status 82 (2m1.888287442s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-893477-m03"  ...
	* Stopping node "multinode-893477-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-893477" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-893477 --wait=true -v=8 --alsologtostderr
E0729 11:14:57.915999   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:16:06.560351   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-893477 --wait=true -v=8 --alsologtostderr: (3m31.285676679s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-893477
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-893477 -n multinode-893477
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-893477 logs -n 25: (1.592330744s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477:/home/docker/cp-test_multinode-893477-m02_multinode-893477.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477 sudo cat                                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m02_multinode-893477.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03:/home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477-m03 sudo cat                                   | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp testdata/cp-test.txt                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477:/home/docker/cp-test_multinode-893477-m03_multinode-893477.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477 sudo cat                                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02:/home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477-m02 sudo cat                                   | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-893477 node stop m03                                                          | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	| node    | multinode-893477 node start                                                             | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| stop    | -p multinode-893477                                                                     | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| start   | -p multinode-893477                                                                     | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:13:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:13:30.736169   40779 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:13:30.736282   40779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:30.736289   40779 out.go:304] Setting ErrFile to fd 2...
	I0729 11:13:30.736293   40779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:30.736484   40779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:13:30.737014   40779 out.go:298] Setting JSON to false
	I0729 11:13:30.737828   40779 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3357,"bootTime":1722248254,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:13:30.737891   40779 start.go:139] virtualization: kvm guest
	I0729 11:13:30.740874   40779 out.go:177] * [multinode-893477] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:13:30.742280   40779 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:13:30.742282   40779 notify.go:220] Checking for updates...
	I0729 11:13:30.745487   40779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:13:30.746878   40779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:13:30.747910   40779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:13:30.749206   40779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:13:30.750978   40779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:13:30.752932   40779 config.go:182] Loaded profile config "multinode-893477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:13:30.753022   40779 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:13:30.753461   40779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:13:30.753528   40779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:13:30.768630   40779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0729 11:13:30.769085   40779 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:13:30.769704   40779 main.go:141] libmachine: Using API Version  1
	I0729 11:13:30.769730   40779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:13:30.770012   40779 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:13:30.770174   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.805799   40779 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:13:30.806929   40779 start.go:297] selected driver: kvm2
	I0729 11:13:30.806944   40779 start.go:901] validating driver "kvm2" against &{Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:30.807110   40779 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:13:30.807415   40779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:13:30.807482   40779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:13:30.822192   40779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:13:30.822955   40779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:13:30.822994   40779 cni.go:84] Creating CNI manager for ""
	I0729 11:13:30.823003   40779 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 11:13:30.823063   40779 start.go:340] cluster config:
	{Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:30.823197   40779 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:13:30.824936   40779 out.go:177] * Starting "multinode-893477" primary control-plane node in "multinode-893477" cluster
	I0729 11:13:30.826241   40779 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:13:30.826270   40779 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:13:30.826278   40779 cache.go:56] Caching tarball of preloaded images
	I0729 11:13:30.826343   40779 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:13:30.826352   40779 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:13:30.826505   40779 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/config.json ...
	I0729 11:13:30.826716   40779 start.go:360] acquireMachinesLock for multinode-893477: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:13:30.826766   40779 start.go:364] duration metric: took 26.968µs to acquireMachinesLock for "multinode-893477"
	I0729 11:13:30.826790   40779 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:13:30.826799   40779 fix.go:54] fixHost starting: 
	I0729 11:13:30.827050   40779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:13:30.827089   40779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:13:30.842096   40779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45087
	I0729 11:13:30.842505   40779 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:13:30.842975   40779 main.go:141] libmachine: Using API Version  1
	I0729 11:13:30.842989   40779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:13:30.843228   40779 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:13:30.843402   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.843566   40779 main.go:141] libmachine: (multinode-893477) Calling .GetState
	I0729 11:13:30.844967   40779 fix.go:112] recreateIfNeeded on multinode-893477: state=Running err=<nil>
	W0729 11:13:30.844987   40779 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:13:30.847047   40779 out.go:177] * Updating the running kvm2 "multinode-893477" VM ...
	I0729 11:13:30.848447   40779 machine.go:94] provisionDockerMachine start ...
	I0729 11:13:30.848468   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.848697   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:30.850873   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.851323   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:30.851352   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.851519   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:30.851658   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.851799   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.851951   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:30.852108   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:30.852298   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:30.852309   40779 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:13:30.964097   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-893477
	
	I0729 11:13:30.964123   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:30.964335   40779 buildroot.go:166] provisioning hostname "multinode-893477"
	I0729 11:13:30.964356   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:30.964511   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:30.967153   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.967528   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:30.967564   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.967684   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:30.967902   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.968038   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.968205   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:30.968366   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:30.968532   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:30.968543   40779 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-893477 && echo "multinode-893477" | sudo tee /etc/hostname
	I0729 11:13:31.090555   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-893477
	
	I0729 11:13:31.090577   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.093260   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.093550   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.093592   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.093771   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.093937   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.094115   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.094247   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.094421   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:31.094590   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:31.094607   40779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-893477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-893477/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-893477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:13:31.203820   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
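Note: the scripted /etc/hosts edit above only rewrites an existing 127.0.1.1 entry or appends a new one. A quick manual check of the result on the guest (illustrative commands, not part of the test run; names taken from the log above) would be:
	hostname                       # expect: multinode-893477
	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 multinode-893477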
	I0729 11:13:31.203863   40779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:13:31.203910   40779 buildroot.go:174] setting up certificates
	I0729 11:13:31.203924   40779 provision.go:84] configureAuth start
	I0729 11:13:31.203940   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:31.204263   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:13:31.206844   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.207289   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.207319   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.207434   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.209764   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.210066   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.210100   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.210238   40779 provision.go:143] copyHostCerts
	I0729 11:13:31.210266   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:13:31.210303   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:13:31.210310   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:13:31.210375   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:13:31.210455   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:13:31.210471   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:13:31.210477   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:13:31.210501   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:13:31.210553   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:13:31.210569   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:13:31.210574   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:13:31.210594   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:13:31.210649   40779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.multinode-893477 san=[127.0.0.1 192.168.39.159 localhost minikube multinode-893477]
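The server certificate is generated on the Jenkins host with the SAN list shown above before being copied to the guest. A hedged way to inspect those SANs from the host (assumes openssl is installed; path taken from the log):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect entries such as DNS:localhost, DNS:minikube, DNS:multinode-893477, IP:127.0.0.1, IP:192.168.39.159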
	I0729 11:13:31.405495   40779 provision.go:177] copyRemoteCerts
	I0729 11:13:31.405549   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:13:31.405571   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.408459   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.408814   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.408845   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.409043   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.409224   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.409365   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.409489   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:13:31.493298   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:13:31.493356   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:13:31.519881   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:13:31.519939   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 11:13:31.544399   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:13:31.544463   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:13:31.569440   40779 provision.go:87] duration metric: took 365.502683ms to configureAuth
	I0729 11:13:31.569470   40779 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:13:31.569738   40779 config.go:182] Loaded profile config "multinode-893477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:13:31.569816   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.572563   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.573010   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.573032   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.573233   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.573414   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.573579   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.573744   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.573954   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:31.574151   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:31.574166   40779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:15:02.417197   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:15:02.417221   40779 machine.go:97] duration metric: took 1m31.568761424s to provisionDockerMachine
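provisionDockerMachine completes only after the CRIO_MINIKUBE_OPTIONS drop-in is written and crio is restarted; the restart accounts for most of the 1m31s above. To confirm the drop-in and the service state on the guest (illustrative commands, not part of the run):
	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # expect: active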
	I0729 11:15:02.417234   40779 start.go:293] postStartSetup for "multinode-893477" (driver="kvm2")
	I0729 11:15:02.417247   40779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:15:02.417295   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.417680   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:15:02.417722   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.420770   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.421200   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.421229   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.421369   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.421544   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.421745   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.422025   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.510830   40779 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:15:02.515143   40779 command_runner.go:130] > NAME=Buildroot
	I0729 11:15:02.515160   40779 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 11:15:02.515165   40779 command_runner.go:130] > ID=buildroot
	I0729 11:15:02.515170   40779 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 11:15:02.515175   40779 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 11:15:02.515232   40779 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:15:02.515247   40779 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:15:02.515306   40779 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:15:02.515373   40779 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:15:02.515384   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 11:15:02.515497   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:15:02.525877   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:15:02.550244   40779 start.go:296] duration metric: took 132.997653ms for postStartSetup
	I0729 11:15:02.550313   40779 fix.go:56] duration metric: took 1m31.723512391s for fixHost
	I0729 11:15:02.550343   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.553362   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.553804   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.553839   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.553958   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.554196   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.554352   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.554489   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.554810   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:15:02.554992   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:15:02.555004   40779 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:15:02.663819   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722251702.641533815
	
	I0729 11:15:02.663844   40779 fix.go:216] guest clock: 1722251702.641533815
	I0729 11:15:02.663854   40779 fix.go:229] Guest: 2024-07-29 11:15:02.641533815 +0000 UTC Remote: 2024-07-29 11:15:02.550319148 +0000 UTC m=+91.848445825 (delta=91.214667ms)
	I0729 11:15:02.663924   40779 fix.go:200] guest clock delta is within tolerance: 91.214667ms
	I0729 11:15:02.663933   40779 start.go:83] releasing machines lock for "multinode-893477", held for 1m31.837153851s
	I0729 11:15:02.663973   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.664292   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:15:02.667004   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.667376   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.667406   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.667604   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668130   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668306   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668416   40779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:15:02.668453   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.668558   40779 ssh_runner.go:195] Run: cat /version.json
	I0729 11:15:02.668577   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.671120   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671152   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671553   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.671575   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671611   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.671630   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671686   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.671805   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.671878   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.671988   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.672093   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.672155   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.672221   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.672284   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.772774   40779 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 11:15:02.773554   40779 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 11:15:02.773740   40779 ssh_runner.go:195] Run: systemctl --version
	I0729 11:15:02.779982   40779 command_runner.go:130] > systemd 252 (252)
	I0729 11:15:02.780025   40779 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 11:15:02.780248   40779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:15:02.957139   40779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 11:15:02.966206   40779 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 11:15:02.966257   40779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:15:02.966312   40779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:15:02.978274   40779 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 11:15:02.978303   40779 start.go:495] detecting cgroup driver to use...
	I0729 11:15:02.978382   40779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:15:03.002244   40779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:15:03.018694   40779 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:15:03.018769   40779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:15:03.037073   40779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:15:03.054059   40779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:15:03.222532   40779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:15:03.388931   40779 docker.go:233] disabling docker service ...
	I0729 11:15:03.388992   40779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:15:03.424097   40779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:15:03.444689   40779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:15:03.626482   40779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:15:03.807927   40779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
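After cri-docker and docker are stopped, disabled and masked above, their state can be double-checked like this (illustrative; unit names taken from the preceding commands):
	systemctl is-enabled cri-docker.socket docker.socket docker.service   # expect: disabled or masked for each unit
	systemctl is-active docker                                            # expect: inactive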
	I0729 11:15:03.822958   40779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:15:03.844914   40779 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
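crictl is pointed at the CRI-O socket through /etc/crictl.yaml, so later crictl calls in this log need no endpoint flags. A minimal sketch to verify that on the guest (assumes the same SSH session):
	cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl info       # should report the CRI-O runtime status via that socket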
	I0729 11:15:03.845056   40779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:15:03.845126   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.857188   40779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:15:03.857259   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.869232   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.880979   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.892730   40779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:15:03.904426   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.915527   40779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.927146   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.938527   40779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:15:03.952133   40779 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 11:15:03.952390   40779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:15:03.962895   40779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:15:04.114680   40779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:15:14.030281   40779 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.915537854s)
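All of the sed edits above target the /etc/crio/crio.conf.d/02-crio.conf drop-in; after the ~10s crio restart the effective values can be read back directly (illustrative, keys taken from the commands above):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expect: pause_image = "registry.k8s.io/pause:3.9"
	#         cgroup_manager = "cgroupfs"
	#         conmon_cgroup = "pod"
	#         "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls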
	I0729 11:15:14.030312   40779 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:15:14.030353   40779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:15:14.035865   40779 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 11:15:14.035887   40779 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 11:15:14.035895   40779 command_runner.go:130] > Device: 0,22	Inode: 1418        Links: 1
	I0729 11:15:14.035901   40779 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 11:15:14.035906   40779 command_runner.go:130] > Access: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035912   40779 command_runner.go:130] > Modify: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035917   40779 command_runner.go:130] > Change: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035921   40779 command_runner.go:130] >  Birth: -
	I0729 11:15:14.036136   40779 start.go:563] Will wait 60s for crictl version
	I0729 11:15:14.036210   40779 ssh_runner.go:195] Run: which crictl
	I0729 11:15:14.041124   40779 command_runner.go:130] > /usr/bin/crictl
	I0729 11:15:14.041341   40779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:15:14.090159   40779 command_runner.go:130] > Version:  0.1.0
	I0729 11:15:14.090183   40779 command_runner.go:130] > RuntimeName:  cri-o
	I0729 11:15:14.090220   40779 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 11:15:14.090258   40779 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 11:15:14.091815   40779 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:15:14.091881   40779 ssh_runner.go:195] Run: crio --version
	I0729 11:15:14.129483   40779 command_runner.go:130] > crio version 1.29.1
	I0729 11:15:14.129509   40779 command_runner.go:130] > Version:        1.29.1
	I0729 11:15:14.129517   40779 command_runner.go:130] > GitCommit:      unknown
	I0729 11:15:14.129522   40779 command_runner.go:130] > GitCommitDate:  unknown
	I0729 11:15:14.129526   40779 command_runner.go:130] > GitTreeState:   clean
	I0729 11:15:14.129534   40779 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 11:15:14.129538   40779 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 11:15:14.129542   40779 command_runner.go:130] > Compiler:       gc
	I0729 11:15:14.129547   40779 command_runner.go:130] > Platform:       linux/amd64
	I0729 11:15:14.129550   40779 command_runner.go:130] > Linkmode:       dynamic
	I0729 11:15:14.129556   40779 command_runner.go:130] > BuildTags:      
	I0729 11:15:14.129560   40779 command_runner.go:130] >   containers_image_ostree_stub
	I0729 11:15:14.129564   40779 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 11:15:14.129567   40779 command_runner.go:130] >   btrfs_noversion
	I0729 11:15:14.129571   40779 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 11:15:14.129574   40779 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 11:15:14.129578   40779 command_runner.go:130] >   seccomp
	I0729 11:15:14.129584   40779 command_runner.go:130] > LDFlags:          unknown
	I0729 11:15:14.129589   40779 command_runner.go:130] > SeccompEnabled:   true
	I0729 11:15:14.129599   40779 command_runner.go:130] > AppArmorEnabled:  false
	I0729 11:15:14.129712   40779 ssh_runner.go:195] Run: crio --version
	I0729 11:15:14.164380   40779 command_runner.go:130] > crio version 1.29.1
	I0729 11:15:14.164407   40779 command_runner.go:130] > Version:        1.29.1
	I0729 11:15:14.164416   40779 command_runner.go:130] > GitCommit:      unknown
	I0729 11:15:14.164423   40779 command_runner.go:130] > GitCommitDate:  unknown
	I0729 11:15:14.164429   40779 command_runner.go:130] > GitTreeState:   clean
	I0729 11:15:14.164441   40779 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 11:15:14.164447   40779 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 11:15:14.164453   40779 command_runner.go:130] > Compiler:       gc
	I0729 11:15:14.164460   40779 command_runner.go:130] > Platform:       linux/amd64
	I0729 11:15:14.164468   40779 command_runner.go:130] > Linkmode:       dynamic
	I0729 11:15:14.164476   40779 command_runner.go:130] > BuildTags:      
	I0729 11:15:14.164483   40779 command_runner.go:130] >   containers_image_ostree_stub
	I0729 11:15:14.164518   40779 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 11:15:14.164532   40779 command_runner.go:130] >   btrfs_noversion
	I0729 11:15:14.164541   40779 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 11:15:14.164549   40779 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 11:15:14.164556   40779 command_runner.go:130] >   seccomp
	I0729 11:15:14.164563   40779 command_runner.go:130] > LDFlags:          unknown
	I0729 11:15:14.164572   40779 command_runner.go:130] > SeccompEnabled:   true
	I0729 11:15:14.164580   40779 command_runner.go:130] > AppArmorEnabled:  false
	I0729 11:15:14.166476   40779 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:15:14.168014   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:15:14.170837   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:14.171363   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:14.171390   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:14.171657   40779 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:15:14.176455   40779 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 11:15:14.176572   40779 kubeadm.go:883] updating cluster {Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:15:14.176706   40779 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:15:14.176743   40779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:15:14.225483   40779 command_runner.go:130] > {
	I0729 11:15:14.225509   40779 command_runner.go:130] >   "images": [
	I0729 11:15:14.225513   40779 command_runner.go:130] >     {
	I0729 11:15:14.225521   40779 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 11:15:14.225526   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225531   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 11:15:14.225535   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225539   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225547   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 11:15:14.225554   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 11:15:14.225564   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225570   40779 command_runner.go:130] >       "size": "87165492",
	I0729 11:15:14.225574   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225578   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225583   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225591   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225594   40779 command_runner.go:130] >     },
	I0729 11:15:14.225597   40779 command_runner.go:130] >     {
	I0729 11:15:14.225603   40779 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 11:15:14.225607   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225614   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 11:15:14.225618   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225622   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225628   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 11:15:14.225636   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 11:15:14.225639   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225646   40779 command_runner.go:130] >       "size": "87174707",
	I0729 11:15:14.225649   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225657   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225663   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225667   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225670   40779 command_runner.go:130] >     },
	I0729 11:15:14.225673   40779 command_runner.go:130] >     {
	I0729 11:15:14.225679   40779 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 11:15:14.225685   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225690   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 11:15:14.225694   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225698   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225704   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 11:15:14.225713   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 11:15:14.225717   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225724   40779 command_runner.go:130] >       "size": "1363676",
	I0729 11:15:14.225727   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225734   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225737   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225741   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225745   40779 command_runner.go:130] >     },
	I0729 11:15:14.225750   40779 command_runner.go:130] >     {
	I0729 11:15:14.225756   40779 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 11:15:14.225762   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225767   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 11:15:14.225770   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225779   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225789   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 11:15:14.225801   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 11:15:14.225805   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225810   40779 command_runner.go:130] >       "size": "31470524",
	I0729 11:15:14.225814   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225818   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225822   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225826   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225829   40779 command_runner.go:130] >     },
	I0729 11:15:14.225833   40779 command_runner.go:130] >     {
	I0729 11:15:14.225838   40779 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 11:15:14.225843   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225848   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 11:15:14.225854   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225858   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225865   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 11:15:14.225875   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 11:15:14.225880   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225884   40779 command_runner.go:130] >       "size": "61245718",
	I0729 11:15:14.225891   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225894   40779 command_runner.go:130] >       "username": "nonroot",
	I0729 11:15:14.225898   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225901   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225905   40779 command_runner.go:130] >     },
	I0729 11:15:14.225908   40779 command_runner.go:130] >     {
	I0729 11:15:14.225914   40779 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 11:15:14.225920   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225924   40779 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 11:15:14.225930   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225934   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225941   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 11:15:14.225949   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 11:15:14.225955   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225960   40779 command_runner.go:130] >       "size": "150779692",
	I0729 11:15:14.225965   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.225970   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.225975   40779 command_runner.go:130] >       },
	I0729 11:15:14.225979   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225987   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225993   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225996   40779 command_runner.go:130] >     },
	I0729 11:15:14.226002   40779 command_runner.go:130] >     {
	I0729 11:15:14.226008   40779 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 11:15:14.226014   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226019   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 11:15:14.226024   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226028   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226035   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 11:15:14.226044   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 11:15:14.226047   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226057   40779 command_runner.go:130] >       "size": "117609954",
	I0729 11:15:14.226063   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226067   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226073   40779 command_runner.go:130] >       },
	I0729 11:15:14.226077   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226083   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226087   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226092   40779 command_runner.go:130] >     },
	I0729 11:15:14.226096   40779 command_runner.go:130] >     {
	I0729 11:15:14.226103   40779 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 11:15:14.226108   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226116   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 11:15:14.226122   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226126   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226142   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 11:15:14.226153   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 11:15:14.226157   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226161   40779 command_runner.go:130] >       "size": "112198984",
	I0729 11:15:14.226164   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226168   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226171   40779 command_runner.go:130] >       },
	I0729 11:15:14.226175   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226178   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226182   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226185   40779 command_runner.go:130] >     },
	I0729 11:15:14.226188   40779 command_runner.go:130] >     {
	I0729 11:15:14.226194   40779 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 11:15:14.226197   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226202   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 11:15:14.226205   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226209   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226220   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 11:15:14.226238   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 11:15:14.226243   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226247   40779 command_runner.go:130] >       "size": "85953945",
	I0729 11:15:14.226251   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.226255   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226259   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226262   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226269   40779 command_runner.go:130] >     },
	I0729 11:15:14.226272   40779 command_runner.go:130] >     {
	I0729 11:15:14.226278   40779 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 11:15:14.226284   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226289   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 11:15:14.226294   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226299   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226308   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 11:15:14.226317   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 11:15:14.226322   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226327   40779 command_runner.go:130] >       "size": "63051080",
	I0729 11:15:14.226332   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226337   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226343   40779 command_runner.go:130] >       },
	I0729 11:15:14.226346   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226352   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226356   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226362   40779 command_runner.go:130] >     },
	I0729 11:15:14.226366   40779 command_runner.go:130] >     {
	I0729 11:15:14.226372   40779 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 11:15:14.226376   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226382   40779 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 11:15:14.226386   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226392   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226398   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 11:15:14.226407   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 11:15:14.226412   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226416   40779 command_runner.go:130] >       "size": "750414",
	I0729 11:15:14.226422   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226426   40779 command_runner.go:130] >         "value": "65535"
	I0729 11:15:14.226431   40779 command_runner.go:130] >       },
	I0729 11:15:14.226435   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226441   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226445   40779 command_runner.go:130] >       "pinned": true
	I0729 11:15:14.226451   40779 command_runner.go:130] >     }
	I0729 11:15:14.226454   40779 command_runner.go:130] >   ]
	I0729 11:15:14.226459   40779 command_runner.go:130] > }
	I0729 11:15:14.226613   40779 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:15:14.226624   40779 crio.go:433] Images already preloaded, skipping extraction
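The preload check parses the JSON from "sudo crictl images --output json" and matches the repoTags against the image list required for Kubernetes v1.30.3 on crio. A hedged one-liner to eyeball the same data on the guest (jq is an assumption and may not be present on the Buildroot image):
	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# or, without jq:
	sudo crictl images   # plain table of repoTags and sizes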
	I0729 11:15:14.226679   40779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:15:14.262684   40779 command_runner.go:130] > {
	I0729 11:15:14.262715   40779 command_runner.go:130] >   "images": [
	I0729 11:15:14.262720   40779 command_runner.go:130] >     {
	I0729 11:15:14.262727   40779 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 11:15:14.262732   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262737   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 11:15:14.262741   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262746   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262759   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 11:15:14.262766   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 11:15:14.262772   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262776   40779 command_runner.go:130] >       "size": "87165492",
	I0729 11:15:14.262783   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.262787   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.262795   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.262799   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.262802   40779 command_runner.go:130] >     },
	I0729 11:15:14.262807   40779 command_runner.go:130] >     {
	I0729 11:15:14.262815   40779 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 11:15:14.262821   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262828   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 11:15:14.262836   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262841   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262853   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 11:15:14.262867   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 11:15:14.262874   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262880   40779 command_runner.go:130] >       "size": "87174707",
	I0729 11:15:14.262886   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.262895   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.262901   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.262908   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.262914   40779 command_runner.go:130] >     },
	I0729 11:15:14.262920   40779 command_runner.go:130] >     {
	I0729 11:15:14.262925   40779 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 11:15:14.262929   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262940   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 11:15:14.262945   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262954   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262966   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 11:15:14.262980   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 11:15:14.262985   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262993   40779 command_runner.go:130] >       "size": "1363676",
	I0729 11:15:14.262997   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263014   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263027   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263031   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263035   40779 command_runner.go:130] >     },
	I0729 11:15:14.263039   40779 command_runner.go:130] >     {
	I0729 11:15:14.263046   40779 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 11:15:14.263050   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263059   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 11:15:14.263065   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263071   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263084   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 11:15:14.263103   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 11:15:14.263110   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263116   40779 command_runner.go:130] >       "size": "31470524",
	I0729 11:15:14.263123   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263129   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263133   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263138   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263141   40779 command_runner.go:130] >     },
	I0729 11:15:14.263144   40779 command_runner.go:130] >     {
	I0729 11:15:14.263151   40779 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 11:15:14.263161   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263170   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 11:15:14.263179   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263188   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263202   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 11:15:14.263216   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 11:15:14.263225   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263231   40779 command_runner.go:130] >       "size": "61245718",
	I0729 11:15:14.263235   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263244   40779 command_runner.go:130] >       "username": "nonroot",
	I0729 11:15:14.263254   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263261   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263269   40779 command_runner.go:130] >     },
	I0729 11:15:14.263276   40779 command_runner.go:130] >     {
	I0729 11:15:14.263289   40779 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 11:15:14.263299   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263309   40779 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 11:15:14.263315   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263319   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263332   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 11:15:14.263347   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 11:15:14.263356   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263367   40779 command_runner.go:130] >       "size": "150779692",
	I0729 11:15:14.263375   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263385   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263396   40779 command_runner.go:130] >       },
	I0729 11:15:14.263405   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263411   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263417   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263425   40779 command_runner.go:130] >     },
	I0729 11:15:14.263430   40779 command_runner.go:130] >     {
	I0729 11:15:14.263443   40779 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 11:15:14.263451   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263462   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 11:15:14.263471   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263480   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263492   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 11:15:14.263512   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 11:15:14.263521   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263531   40779 command_runner.go:130] >       "size": "117609954",
	I0729 11:15:14.263540   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263549   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263557   40779 command_runner.go:130] >       },
	I0729 11:15:14.263566   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263575   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263578   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263582   40779 command_runner.go:130] >     },
	I0729 11:15:14.263588   40779 command_runner.go:130] >     {
	I0729 11:15:14.263598   40779 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 11:15:14.263608   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263619   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 11:15:14.263629   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263638   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263663   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 11:15:14.263677   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 11:15:14.263682   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263689   40779 command_runner.go:130] >       "size": "112198984",
	I0729 11:15:14.263699   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263706   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263715   40779 command_runner.go:130] >       },
	I0729 11:15:14.263721   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263730   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263739   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263746   40779 command_runner.go:130] >     },
	I0729 11:15:14.263751   40779 command_runner.go:130] >     {
	I0729 11:15:14.263759   40779 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 11:15:14.263767   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263778   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 11:15:14.263786   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263792   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263805   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 11:15:14.263822   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 11:15:14.263830   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263836   40779 command_runner.go:130] >       "size": "85953945",
	I0729 11:15:14.263843   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263847   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263863   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263869   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263878   40779 command_runner.go:130] >     },
	I0729 11:15:14.263885   40779 command_runner.go:130] >     {
	I0729 11:15:14.263895   40779 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 11:15:14.263905   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263916   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 11:15:14.263924   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263931   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263941   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 11:15:14.263957   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 11:15:14.263967   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263975   40779 command_runner.go:130] >       "size": "63051080",
	I0729 11:15:14.263983   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263992   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.264000   40779 command_runner.go:130] >       },
	I0729 11:15:14.264009   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.264015   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.264020   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.264027   40779 command_runner.go:130] >     },
	I0729 11:15:14.264034   40779 command_runner.go:130] >     {
	I0729 11:15:14.264047   40779 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 11:15:14.264065   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.264076   40779 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 11:15:14.264081   40779 command_runner.go:130] >       ],
	I0729 11:15:14.264088   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.264099   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 11:15:14.264110   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 11:15:14.264118   40779 command_runner.go:130] >       ],
	I0729 11:15:14.264131   40779 command_runner.go:130] >       "size": "750414",
	I0729 11:15:14.264140   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.264149   40779 command_runner.go:130] >         "value": "65535"
	I0729 11:15:14.264158   40779 command_runner.go:130] >       },
	I0729 11:15:14.264166   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.264175   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.264182   40779 command_runner.go:130] >       "pinned": true
	I0729 11:15:14.264186   40779 command_runner.go:130] >     }
	I0729 11:15:14.264191   40779 command_runner.go:130] >   ]
	I0729 11:15:14.264199   40779 command_runner.go:130] > }
	I0729 11:15:14.264358   40779 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:15:14.264370   40779 cache_images.go:84] Images are preloaded, skipping loading
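	The JSON dumped above is the runtime's image listing (the same shape `crictl images -o json` returns for CRI-O), and minikube only skips loading once every image required for the requested Kubernetes version shows up in it. A minimal sketch of that check, assuming the array is keyed "images" as in crictl's JSON output; the struct mirrors the "id"/"repoTags"/"repoDigests"/"size"/"pinned" keys visible in the log, and the required-image list and helper names are illustrative, not minikube's cache_images implementation:

	// imagecheck.go - sketch: decide whether required images already exist
	// in a CRI-O image listing like the JSON dumped above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"` // assumed top-level key, as in crictl output
	}

	// allPreloaded reports whether every required tag appears in the listing.
	func allPreloaded(listing imageList, required []string) bool {
		have := map[string]bool{}
		for _, img := range listing.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range required {
			if !have[want] {
				return false
			}
		}
		return true
	}

	func main() {
		raw, err := os.ReadFile("images.json") // e.g. saved output of `crictl images -o json`
		if err != nil {
			panic(err)
		}
		var listing imageList
		if err := json.Unmarshal(raw, &listing); err != nil {
			panic(err)
		}
		required := []string{ // illustrative subset of the tags seen above
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		}
		fmt.Println("all images preloaded:", allPreloaded(listing, required))
	}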
	I0729 11:15:14.264387   40779 kubeadm.go:934] updating node { 192.168.39.159 8443 v1.30.3 crio true true} ...
	I0729 11:15:14.264548   40779 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-893477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
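	The kubelet drop-in logged above is rendered from the node and cluster settings on the same line: --hostname-override and --node-ip come from the node entry (multinode-893477, 192.168.39.159), and the binary path embeds KubernetesVersion v1.30.3. A minimal sketch of producing such a unit with text/template; the struct, template, and file names are illustrative, not minikube's own code:

	// kubeletunit.go - sketch: render a kubelet systemd drop-in from node
	// settings, mirroring the [Unit]/[Service]/[Install] text logged above.
	package main

	import (
		"os"
		"text/template"
	)

	type nodeConfig struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		cfg := nodeConfig{ // values taken from the log line above
			KubernetesVersion: "v1.30.3",
			NodeName:          "multinode-893477",
			NodeIP:            "192.168.39.159",
		}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}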
	I0729 11:15:14.264633   40779 ssh_runner.go:195] Run: crio config
	I0729 11:15:14.298856   40779 command_runner.go:130] ! time="2024-07-29 11:15:14.276448668Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 11:15:14.304984   40779 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 11:15:14.312361   40779 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 11:15:14.312385   40779 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 11:15:14.312407   40779 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 11:15:14.312411   40779 command_runner.go:130] > #
	I0729 11:15:14.312418   40779 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 11:15:14.312424   40779 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 11:15:14.312430   40779 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 11:15:14.312446   40779 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 11:15:14.312450   40779 command_runner.go:130] > # reload'.
	I0729 11:15:14.312456   40779 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 11:15:14.312464   40779 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 11:15:14.312471   40779 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 11:15:14.312479   40779 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 11:15:14.312482   40779 command_runner.go:130] > [crio]
	I0729 11:15:14.312491   40779 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 11:15:14.312496   40779 command_runner.go:130] > # containers images, in this directory.
	I0729 11:15:14.312502   40779 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 11:15:14.312512   40779 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 11:15:14.312519   40779 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 11:15:14.312526   40779 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 11:15:14.312532   40779 command_runner.go:130] > # imagestore = ""
	I0729 11:15:14.312538   40779 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 11:15:14.312545   40779 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 11:15:14.312550   40779 command_runner.go:130] > storage_driver = "overlay"
	I0729 11:15:14.312557   40779 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 11:15:14.312563   40779 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 11:15:14.312571   40779 command_runner.go:130] > storage_option = [
	I0729 11:15:14.312576   40779 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 11:15:14.312579   40779 command_runner.go:130] > ]
	I0729 11:15:14.312587   40779 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 11:15:14.312593   40779 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 11:15:14.312599   40779 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 11:15:14.312604   40779 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 11:15:14.312612   40779 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 11:15:14.312619   40779 command_runner.go:130] > # always happen on a node reboot
	I0729 11:15:14.312623   40779 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 11:15:14.312634   40779 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 11:15:14.312642   40779 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 11:15:14.312653   40779 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 11:15:14.312660   40779 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 11:15:14.312667   40779 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 11:15:14.312677   40779 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 11:15:14.312683   40779 command_runner.go:130] > # internal_wipe = true
	I0729 11:15:14.312691   40779 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 11:15:14.312698   40779 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 11:15:14.312703   40779 command_runner.go:130] > # internal_repair = false
	I0729 11:15:14.312710   40779 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 11:15:14.312715   40779 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 11:15:14.312722   40779 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 11:15:14.312727   40779 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 11:15:14.312736   40779 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 11:15:14.312742   40779 command_runner.go:130] > [crio.api]
	I0729 11:15:14.312747   40779 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 11:15:14.312751   40779 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 11:15:14.312758   40779 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 11:15:14.312763   40779 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 11:15:14.312771   40779 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 11:15:14.312778   40779 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 11:15:14.312784   40779 command_runner.go:130] > # stream_port = "0"
	I0729 11:15:14.312789   40779 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 11:15:14.312794   40779 command_runner.go:130] > # stream_enable_tls = false
	I0729 11:15:14.312800   40779 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 11:15:14.312806   40779 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 11:15:14.312816   40779 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 11:15:14.312827   40779 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 11:15:14.312831   40779 command_runner.go:130] > # minutes.
	I0729 11:15:14.312835   40779 command_runner.go:130] > # stream_tls_cert = ""
	I0729 11:15:14.312840   40779 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 11:15:14.312848   40779 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 11:15:14.312852   40779 command_runner.go:130] > # stream_tls_key = ""
	I0729 11:15:14.312858   40779 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 11:15:14.312866   40779 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 11:15:14.312885   40779 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 11:15:14.312893   40779 command_runner.go:130] > # stream_tls_ca = ""
	I0729 11:15:14.312906   40779 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 11:15:14.312913   40779 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 11:15:14.312924   40779 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 11:15:14.312931   40779 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 11:15:14.312937   40779 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 11:15:14.312944   40779 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 11:15:14.312948   40779 command_runner.go:130] > [crio.runtime]
	I0729 11:15:14.312955   40779 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 11:15:14.312961   40779 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 11:15:14.312966   40779 command_runner.go:130] > # "nofile=1024:2048"
	I0729 11:15:14.312972   40779 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 11:15:14.312978   40779 command_runner.go:130] > # default_ulimits = [
	I0729 11:15:14.312981   40779 command_runner.go:130] > # ]
	I0729 11:15:14.312988   40779 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 11:15:14.312993   40779 command_runner.go:130] > # no_pivot = false
	I0729 11:15:14.312999   40779 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 11:15:14.313006   40779 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 11:15:14.313013   40779 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 11:15:14.313019   40779 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 11:15:14.313026   40779 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 11:15:14.313032   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 11:15:14.313038   40779 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 11:15:14.313042   40779 command_runner.go:130] > # Cgroup setting for conmon
	I0729 11:15:14.313051   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 11:15:14.313057   40779 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 11:15:14.313069   40779 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 11:15:14.313076   40779 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 11:15:14.313085   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 11:15:14.313090   40779 command_runner.go:130] > conmon_env = [
	I0729 11:15:14.313095   40779 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 11:15:14.313101   40779 command_runner.go:130] > ]
	I0729 11:15:14.313106   40779 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 11:15:14.313111   40779 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 11:15:14.313119   40779 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 11:15:14.313122   40779 command_runner.go:130] > # default_env = [
	I0729 11:15:14.313128   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313137   40779 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 11:15:14.313146   40779 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 11:15:14.313152   40779 command_runner.go:130] > # selinux = false
	I0729 11:15:14.313158   40779 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 11:15:14.313166   40779 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 11:15:14.313171   40779 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 11:15:14.313176   40779 command_runner.go:130] > # seccomp_profile = ""
	I0729 11:15:14.313182   40779 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 11:15:14.313189   40779 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 11:15:14.313194   40779 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 11:15:14.313200   40779 command_runner.go:130] > # which might increase security.
	I0729 11:15:14.313205   40779 command_runner.go:130] > # This option is currently deprecated,
	I0729 11:15:14.313213   40779 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 11:15:14.313217   40779 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 11:15:14.313223   40779 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 11:15:14.313230   40779 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 11:15:14.313236   40779 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 11:15:14.313244   40779 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 11:15:14.313249   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313256   40779 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 11:15:14.313261   40779 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 11:15:14.313268   40779 command_runner.go:130] > # the cgroup blockio controller.
	I0729 11:15:14.313272   40779 command_runner.go:130] > # blockio_config_file = ""
	I0729 11:15:14.313280   40779 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 11:15:14.313286   40779 command_runner.go:130] > # blockio parameters.
	I0729 11:15:14.313290   40779 command_runner.go:130] > # blockio_reload = false
	I0729 11:15:14.313298   40779 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 11:15:14.313302   40779 command_runner.go:130] > # irqbalance daemon.
	I0729 11:15:14.313307   40779 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 11:15:14.313317   40779 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 11:15:14.313326   40779 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 11:15:14.313334   40779 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 11:15:14.313341   40779 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 11:15:14.313347   40779 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 11:15:14.313354   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313358   40779 command_runner.go:130] > # rdt_config_file = ""
	I0729 11:15:14.313367   40779 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 11:15:14.313374   40779 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 11:15:14.313402   40779 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 11:15:14.313409   40779 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 11:15:14.313415   40779 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 11:15:14.313421   40779 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 11:15:14.313427   40779 command_runner.go:130] > # will be added.
	I0729 11:15:14.313431   40779 command_runner.go:130] > # default_capabilities = [
	I0729 11:15:14.313437   40779 command_runner.go:130] > # 	"CHOWN",
	I0729 11:15:14.313441   40779 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 11:15:14.313446   40779 command_runner.go:130] > # 	"FSETID",
	I0729 11:15:14.313450   40779 command_runner.go:130] > # 	"FOWNER",
	I0729 11:15:14.313456   40779 command_runner.go:130] > # 	"SETGID",
	I0729 11:15:14.313459   40779 command_runner.go:130] > # 	"SETUID",
	I0729 11:15:14.313465   40779 command_runner.go:130] > # 	"SETPCAP",
	I0729 11:15:14.313470   40779 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 11:15:14.313475   40779 command_runner.go:130] > # 	"KILL",
	I0729 11:15:14.313478   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313487   40779 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 11:15:14.313495   40779 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 11:15:14.313502   40779 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 11:15:14.313508   40779 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 11:15:14.313516   40779 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 11:15:14.313520   40779 command_runner.go:130] > default_sysctls = [
	I0729 11:15:14.313527   40779 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 11:15:14.313531   40779 command_runner.go:130] > ]
	I0729 11:15:14.313538   40779 command_runner.go:130] > # List of devices on the host that a
	I0729 11:15:14.313546   40779 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 11:15:14.313552   40779 command_runner.go:130] > # allowed_devices = [
	I0729 11:15:14.313555   40779 command_runner.go:130] > # 	"/dev/fuse",
	I0729 11:15:14.313561   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313566   40779 command_runner.go:130] > # List of additional devices. specified as
	I0729 11:15:14.313574   40779 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 11:15:14.313581   40779 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 11:15:14.313589   40779 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 11:15:14.313595   40779 command_runner.go:130] > # additional_devices = [
	I0729 11:15:14.313602   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313609   40779 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 11:15:14.313613   40779 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 11:15:14.313618   40779 command_runner.go:130] > # 	"/etc/cdi",
	I0729 11:15:14.313622   40779 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 11:15:14.313627   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313633   40779 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 11:15:14.313641   40779 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 11:15:14.313647   40779 command_runner.go:130] > # Defaults to false.
	I0729 11:15:14.313652   40779 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 11:15:14.313659   40779 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 11:15:14.313667   40779 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 11:15:14.313671   40779 command_runner.go:130] > # hooks_dir = [
	I0729 11:15:14.313677   40779 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 11:15:14.313681   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313689   40779 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 11:15:14.313697   40779 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 11:15:14.313702   40779 command_runner.go:130] > # its default mounts from the following two files:
	I0729 11:15:14.313707   40779 command_runner.go:130] > #
	I0729 11:15:14.313713   40779 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 11:15:14.313721   40779 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 11:15:14.313727   40779 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 11:15:14.313732   40779 command_runner.go:130] > #
	I0729 11:15:14.313738   40779 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 11:15:14.313746   40779 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 11:15:14.313752   40779 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 11:15:14.313758   40779 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 11:15:14.313762   40779 command_runner.go:130] > #
	I0729 11:15:14.313768   40779 command_runner.go:130] > # default_mounts_file = ""
	I0729 11:15:14.313773   40779 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 11:15:14.313781   40779 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 11:15:14.313787   40779 command_runner.go:130] > pids_limit = 1024
	I0729 11:15:14.313793   40779 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0729 11:15:14.313801   40779 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 11:15:14.313806   40779 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 11:15:14.313818   40779 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 11:15:14.313829   40779 command_runner.go:130] > # log_size_max = -1
	I0729 11:15:14.313838   40779 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 11:15:14.313844   40779 command_runner.go:130] > # log_to_journald = false
	I0729 11:15:14.313851   40779 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 11:15:14.313856   40779 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 11:15:14.313863   40779 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 11:15:14.313868   40779 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 11:15:14.313873   40779 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 11:15:14.313877   40779 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 11:15:14.313882   40779 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 11:15:14.313889   40779 command_runner.go:130] > # read_only = false
	I0729 11:15:14.313894   40779 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 11:15:14.313900   40779 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 11:15:14.313906   40779 command_runner.go:130] > # live configuration reload.
	I0729 11:15:14.313910   40779 command_runner.go:130] > # log_level = "info"
	I0729 11:15:14.313917   40779 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 11:15:14.313924   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313928   40779 command_runner.go:130] > # log_filter = ""
	I0729 11:15:14.313935   40779 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 11:15:14.313943   40779 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 11:15:14.313949   40779 command_runner.go:130] > # separated by comma.
	I0729 11:15:14.313956   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.313961   40779 command_runner.go:130] > # uid_mappings = ""
	I0729 11:15:14.313967   40779 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 11:15:14.313975   40779 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 11:15:14.313981   40779 command_runner.go:130] > # separated by comma.
	I0729 11:15:14.313988   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.313993   40779 command_runner.go:130] > # gid_mappings = ""
	I0729 11:15:14.313999   40779 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 11:15:14.314007   40779 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 11:15:14.314015   40779 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 11:15:14.314024   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.314030   40779 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 11:15:14.314036   40779 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 11:15:14.314043   40779 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 11:15:14.314051   40779 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 11:15:14.314066   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.314075   40779 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 11:15:14.314081   40779 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 11:15:14.314088   40779 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 11:15:14.314095   40779 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 11:15:14.314101   40779 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 11:15:14.314106   40779 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 11:15:14.314111   40779 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 11:15:14.314117   40779 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 11:15:14.314122   40779 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 11:15:14.314128   40779 command_runner.go:130] > drop_infra_ctr = false
	I0729 11:15:14.314135   40779 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 11:15:14.314143   40779 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 11:15:14.314151   40779 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 11:15:14.314157   40779 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 11:15:14.314164   40779 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 11:15:14.314172   40779 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 11:15:14.314177   40779 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 11:15:14.314184   40779 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 11:15:14.314187   40779 command_runner.go:130] > # shared_cpuset = ""
	I0729 11:15:14.314195   40779 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 11:15:14.314200   40779 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 11:15:14.314205   40779 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 11:15:14.314213   40779 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 11:15:14.314220   40779 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 11:15:14.314225   40779 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 11:15:14.314233   40779 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 11:15:14.314237   40779 command_runner.go:130] > # enable_criu_support = false
	I0729 11:15:14.314242   40779 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 11:15:14.314250   40779 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 11:15:14.314257   40779 command_runner.go:130] > # enable_pod_events = false
	I0729 11:15:14.314262   40779 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 11:15:14.314270   40779 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 11:15:14.314276   40779 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 11:15:14.314281   40779 command_runner.go:130] > # default_runtime = "runc"
	I0729 11:15:14.314286   40779 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 11:15:14.314298   40779 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 11:15:14.314308   40779 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 11:15:14.314317   40779 command_runner.go:130] > # creation as a file is not desired either.
	I0729 11:15:14.314326   40779 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 11:15:14.314333   40779 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 11:15:14.314338   40779 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 11:15:14.314341   40779 command_runner.go:130] > # ]
	I0729 11:15:14.314348   40779 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 11:15:14.314356   40779 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 11:15:14.314362   40779 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 11:15:14.314369   40779 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 11:15:14.314371   40779 command_runner.go:130] > #
	I0729 11:15:14.314376   40779 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 11:15:14.314383   40779 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 11:15:14.314405   40779 command_runner.go:130] > # runtime_type = "oci"
	I0729 11:15:14.314411   40779 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 11:15:14.314416   40779 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 11:15:14.314422   40779 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 11:15:14.314426   40779 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 11:15:14.314432   40779 command_runner.go:130] > # monitor_env = []
	I0729 11:15:14.314437   40779 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 11:15:14.314443   40779 command_runner.go:130] > # allowed_annotations = []
	I0729 11:15:14.314449   40779 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 11:15:14.314454   40779 command_runner.go:130] > # Where:
	I0729 11:15:14.314459   40779 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 11:15:14.314465   40779 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 11:15:14.314472   40779 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 11:15:14.314478   40779 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 11:15:14.314484   40779 command_runner.go:130] > #   in $PATH.
	I0729 11:15:14.314490   40779 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 11:15:14.314497   40779 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 11:15:14.314503   40779 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 11:15:14.314508   40779 command_runner.go:130] > #   state.
	I0729 11:15:14.314514   40779 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 11:15:14.314521   40779 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0729 11:15:14.314527   40779 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 11:15:14.314535   40779 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 11:15:14.314543   40779 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 11:15:14.314549   40779 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 11:15:14.314558   40779 command_runner.go:130] > #   The currently recognized values are:
	I0729 11:15:14.314564   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 11:15:14.314573   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 11:15:14.314581   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 11:15:14.314587   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 11:15:14.314596   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 11:15:14.314604   40779 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 11:15:14.314612   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 11:15:14.314620   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 11:15:14.314628   40779 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 11:15:14.314634   40779 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 11:15:14.314640   40779 command_runner.go:130] > #   deprecated option "conmon".
	I0729 11:15:14.314646   40779 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 11:15:14.314653   40779 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 11:15:14.314659   40779 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 11:15:14.314671   40779 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 11:15:14.314679   40779 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0729 11:15:14.314686   40779 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 11:15:14.314692   40779 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 11:15:14.314712   40779 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 11:15:14.314718   40779 command_runner.go:130] > #
	I0729 11:15:14.314728   40779 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 11:15:14.314733   40779 command_runner.go:130] > #
	I0729 11:15:14.314742   40779 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 11:15:14.314750   40779 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 11:15:14.314756   40779 command_runner.go:130] > #
	I0729 11:15:14.314762   40779 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 11:15:14.314770   40779 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 11:15:14.314774   40779 command_runner.go:130] > #
	I0729 11:15:14.314780   40779 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 11:15:14.314786   40779 command_runner.go:130] > # feature.
	I0729 11:15:14.314789   40779 command_runner.go:130] > #
	I0729 11:15:14.314797   40779 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 11:15:14.314803   40779 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 11:15:14.314813   40779 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 11:15:14.314823   40779 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 11:15:14.314831   40779 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 11:15:14.314835   40779 command_runner.go:130] > #
	I0729 11:15:14.314841   40779 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 11:15:14.314849   40779 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 11:15:14.314852   40779 command_runner.go:130] > #
	I0729 11:15:14.314858   40779 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 11:15:14.314866   40779 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 11:15:14.314869   40779 command_runner.go:130] > #
	I0729 11:15:14.314875   40779 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 11:15:14.314882   40779 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 11:15:14.314885   40779 command_runner.go:130] > # limitation.
	I0729 11:15:14.314893   40779 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 11:15:14.314900   40779 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 11:15:14.314904   40779 command_runner.go:130] > runtime_type = "oci"
	I0729 11:15:14.314909   40779 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 11:15:14.314912   40779 command_runner.go:130] > runtime_config_path = ""
	I0729 11:15:14.314919   40779 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 11:15:14.314923   40779 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 11:15:14.314927   40779 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 11:15:14.314933   40779 command_runner.go:130] > monitor_env = [
	I0729 11:15:14.314939   40779 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 11:15:14.314943   40779 command_runner.go:130] > ]
	I0729 11:15:14.314948   40779 command_runner.go:130] > privileged_without_host_devices = false
	I0729 11:15:14.314956   40779 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 11:15:14.314963   40779 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 11:15:14.314969   40779 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 11:15:14.314978   40779 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 11:15:14.314987   40779 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 11:15:14.314995   40779 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 11:15:14.315004   40779 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 11:15:14.315013   40779 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 11:15:14.315019   40779 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 11:15:14.315026   40779 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 11:15:14.315030   40779 command_runner.go:130] > # Example:
	I0729 11:15:14.315034   40779 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 11:15:14.315039   40779 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 11:15:14.315046   40779 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 11:15:14.315050   40779 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 11:15:14.315054   40779 command_runner.go:130] > # cpuset = 0
	I0729 11:15:14.315057   40779 command_runner.go:130] > # cpushares = "0-1"
	I0729 11:15:14.315063   40779 command_runner.go:130] > # Where:
	I0729 11:15:14.315067   40779 command_runner.go:130] > # The workload name is workload-type.
	I0729 11:15:14.315073   40779 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 11:15:14.315078   40779 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 11:15:14.315083   40779 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 11:15:14.315091   40779 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 11:15:14.315098   40779 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
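	Purely as a sketch of the opt-in described in the comments above (the pod/container names and the value are hypothetical, and the two comment lines above show slightly different per-container key forms depending on the CRI-O version), a pod using the example workload might look like:

	# Sketch only: assumes the commented [crio.runtime.workloads.workload-type] table were actually enabled.
	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                           # activation annotation (key only, value ignored)
	    io.crio.workload-type.cpushares/demo: "512"    # per-container override: $annotation_prefix.$resource/$ctrName
	spec:
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF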
	I0729 11:15:14.315103   40779 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 11:15:14.315109   40779 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 11:15:14.315115   40779 command_runner.go:130] > # Default value is set to true
	I0729 11:15:14.315120   40779 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 11:15:14.315127   40779 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 11:15:14.315134   40779 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 11:15:14.315138   40779 command_runner.go:130] > # Default value is set to 'false'
	I0729 11:15:14.315145   40779 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 11:15:14.315151   40779 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 11:15:14.315157   40779 command_runner.go:130] > #
	I0729 11:15:14.315163   40779 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 11:15:14.315171   40779 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 11:15:14.315179   40779 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 11:15:14.315187   40779 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 11:15:14.315195   40779 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 11:15:14.315198   40779 command_runner.go:130] > [crio.image]
	I0729 11:15:14.315206   40779 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 11:15:14.315210   40779 command_runner.go:130] > # default_transport = "docker://"
	I0729 11:15:14.315218   40779 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 11:15:14.315226   40779 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 11:15:14.315232   40779 command_runner.go:130] > # global_auth_file = ""
	I0729 11:15:14.315237   40779 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 11:15:14.315245   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.315251   40779 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 11:15:14.315257   40779 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 11:15:14.315266   40779 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 11:15:14.315271   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.315277   40779 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 11:15:14.315284   40779 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 11:15:14.315290   40779 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 11:15:14.315297   40779 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 11:15:14.315306   40779 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 11:15:14.315310   40779 command_runner.go:130] > # pause_command = "/pause"
	I0729 11:15:14.315318   40779 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 11:15:14.315325   40779 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 11:15:14.315333   40779 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 11:15:14.315344   40779 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 11:15:14.315352   40779 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 11:15:14.315360   40779 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 11:15:14.315364   40779 command_runner.go:130] > # pinned_images = [
	I0729 11:15:14.315370   40779 command_runner.go:130] > # ]
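	A minimal sketch of how pinning could be configured without editing the main file, via a CRI-O drop-in (file name and image list are illustrative, not part of this test run):

	# Sketch: pin the pause image so kubelet image GC never removes it.
	sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf <<'EOF'
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.9",
	]
	EOF
	sudo systemctl restart crio   # a reload may suffice, depending on the CRI-O version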
	I0729 11:15:14.315376   40779 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 11:15:14.315384   40779 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 11:15:14.315391   40779 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 11:15:14.315399   40779 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 11:15:14.315406   40779 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 11:15:14.315412   40779 command_runner.go:130] > # signature_policy = ""
	I0729 11:15:14.315417   40779 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 11:15:14.315425   40779 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 11:15:14.315433   40779 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 11:15:14.315440   40779 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 11:15:14.315447   40779 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 11:15:14.315451   40779 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
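	For reference, the system-wide default mentioned above is typically a permissive policy.json; a per-namespace policy under signature_policy_dir uses the same containers-policy.json(5) format (sketch, namespace name hypothetical):

	# Sketch: a permissive signature policy for pods pulled in the "default" namespace.
	sudo mkdir -p /etc/crio/policies
	sudo tee /etc/crio/policies/default.json <<'EOF'
	{ "default": [ { "type": "insecureAcceptAnything" } ] }
	EOF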
	I0729 11:15:14.315459   40779 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 11:15:14.315467   40779 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 11:15:14.315473   40779 command_runner.go:130] > # changing them here.
	I0729 11:15:14.315477   40779 command_runner.go:130] > # insecure_registries = [
	I0729 11:15:14.315482   40779 command_runner.go:130] > # ]
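	As the comments recommend, registry TLS settings are usually better kept in containers-registries.conf(5) than in crio.conf; a minimal sketch (the registry host is hypothetical):

	# Sketch: mark a private registry as insecure via registries.conf instead of insecure_registries above.
	sudo tee -a /etc/containers/registries.conf <<'EOF'
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true
	EOF
	sudo systemctl restart crio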
	I0729 11:15:14.315489   40779 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 11:15:14.315497   40779 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 11:15:14.315503   40779 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 11:15:14.315508   40779 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 11:15:14.315514   40779 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 11:15:14.315522   40779 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 11:15:14.315528   40779 command_runner.go:130] > # CNI plugins.
	I0729 11:15:14.315531   40779 command_runner.go:130] > [crio.network]
	I0729 11:15:14.315539   40779 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 11:15:14.315544   40779 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 11:15:14.315550   40779 command_runner.go:130] > # cni_default_network = ""
	I0729 11:15:14.315556   40779 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 11:15:14.315564   40779 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 11:15:14.315569   40779 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 11:15:14.315575   40779 command_runner.go:130] > # plugin_dirs = [
	I0729 11:15:14.315578   40779 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 11:15:14.315583   40779 command_runner.go:130] > # ]
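	Purely as an illustration of what CRI-O would auto-select from network_dir (this cluster actually uses kindnet, recommended later in the log), a minimal bridge conflist could look like:

	# Sketch only: a minimal CNI configuration placed in /etc/cni/net.d/.
	sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge-net",
	  "plugins": [
	    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF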
	I0729 11:15:14.315589   40779 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 11:15:14.315595   40779 command_runner.go:130] > [crio.metrics]
	I0729 11:15:14.315600   40779 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 11:15:14.315606   40779 command_runner.go:130] > enable_metrics = true
	I0729 11:15:14.315610   40779 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 11:15:14.315616   40779 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 11:15:14.315623   40779 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 11:15:14.315630   40779 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 11:15:14.315636   40779 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 11:15:14.315641   40779 command_runner.go:130] > # metrics_collectors = [
	I0729 11:15:14.315645   40779 command_runner.go:130] > # 	"operations",
	I0729 11:15:14.315652   40779 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 11:15:14.315656   40779 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 11:15:14.315661   40779 command_runner.go:130] > # 	"operations_errors",
	I0729 11:15:14.315665   40779 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 11:15:14.315671   40779 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 11:15:14.315675   40779 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 11:15:14.315681   40779 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 11:15:14.315685   40779 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 11:15:14.315693   40779 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 11:15:14.315697   40779 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 11:15:14.315703   40779 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 11:15:14.315707   40779 command_runner.go:130] > # 	"containers_oom_total",
	I0729 11:15:14.315713   40779 command_runner.go:130] > # 	"containers_oom",
	I0729 11:15:14.315717   40779 command_runner.go:130] > # 	"processes_defunct",
	I0729 11:15:14.315723   40779 command_runner.go:130] > # 	"operations_total",
	I0729 11:15:14.315728   40779 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 11:15:14.315734   40779 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 11:15:14.315738   40779 command_runner.go:130] > # 	"operations_errors_total",
	I0729 11:15:14.315744   40779 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 11:15:14.315748   40779 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 11:15:14.315755   40779 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 11:15:14.315759   40779 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 11:15:14.315768   40779 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 11:15:14.315774   40779 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 11:15:14.315778   40779 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 11:15:14.315785   40779 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 11:15:14.315788   40779 command_runner.go:130] > # ]
	I0729 11:15:14.315793   40779 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 11:15:14.315799   40779 command_runner.go:130] > # metrics_port = 9090
	I0729 11:15:14.315803   40779 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 11:15:14.315807   40779 command_runner.go:130] > # metrics_socket = ""
	I0729 11:15:14.315815   40779 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 11:15:14.315820   40779 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 11:15:14.315829   40779 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 11:15:14.315833   40779 command_runner.go:130] > # certificate on any modification event.
	I0729 11:15:14.315839   40779 command_runner.go:130] > # metrics_cert = ""
	I0729 11:15:14.315844   40779 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 11:15:14.315850   40779 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 11:15:14.315854   40779 command_runner.go:130] > # metrics_key = ""
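	Since enable_metrics is set to true in this config and the commented default port is 9090, the endpoint can be spot-checked from the node (a sketch; adjust the port if metrics_port was overridden):

	# Sketch: confirm CRI-O is serving Prometheus metrics on the default port.
	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_' | head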
	I0729 11:15:14.315860   40779 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 11:15:14.315864   40779 command_runner.go:130] > [crio.tracing]
	I0729 11:15:14.315870   40779 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 11:15:14.315876   40779 command_runner.go:130] > # enable_tracing = false
	I0729 11:15:14.315881   40779 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 11:15:14.315888   40779 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 11:15:14.315895   40779 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 11:15:14.315899   40779 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 11:15:14.315903   40779 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 11:15:14.315906   40779 command_runner.go:130] > [crio.nri]
	I0729 11:15:14.315914   40779 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 11:15:14.315920   40779 command_runner.go:130] > # enable_nri = false
	I0729 11:15:14.315924   40779 command_runner.go:130] > # NRI socket to listen on.
	I0729 11:15:14.315930   40779 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 11:15:14.315934   40779 command_runner.go:130] > # NRI plugin directory to use.
	I0729 11:15:14.315941   40779 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 11:15:14.315946   40779 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 11:15:14.315953   40779 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 11:15:14.315959   40779 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 11:15:14.315965   40779 command_runner.go:130] > # nri_disable_connections = false
	I0729 11:15:14.315970   40779 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 11:15:14.315976   40779 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 11:15:14.315981   40779 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 11:15:14.315988   40779 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 11:15:14.315994   40779 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 11:15:14.315999   40779 command_runner.go:130] > [crio.stats]
	I0729 11:15:14.316008   40779 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 11:15:14.316014   40779 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 11:15:14.316019   40779 command_runner.go:130] > # stats_collection_period = 0
	I0729 11:15:14.316139   40779 cni.go:84] Creating CNI manager for ""
	I0729 11:15:14.316151   40779 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 11:15:14.316162   40779 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:15:14.316187   40779 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-893477 NodeName:multinode-893477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:15:14.316320   40779 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-893477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
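	The rendered config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; one way to exercise such a file without changing the node is a dry run (sketch only; minikube drives kubeadm itself, and on a node where the cluster is already up the preflight checks will complain):

	# Sketch: validate the generated kubeadm config without applying it.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run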
	
	I0729 11:15:14.316377   40779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:15:14.326348   40779 command_runner.go:130] > kubeadm
	I0729 11:15:14.326364   40779 command_runner.go:130] > kubectl
	I0729 11:15:14.326368   40779 command_runner.go:130] > kubelet
	I0729 11:15:14.326385   40779 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:15:14.326432   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:15:14.335914   40779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 11:15:14.353920   40779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:15:14.371873   40779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 11:15:14.389947   40779 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I0729 11:15:14.400249   40779 command_runner.go:130] > 192.168.39.159	control-plane.minikube.internal
	I0729 11:15:14.400780   40779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:15:14.554867   40779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:15:14.571677   40779 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477 for IP: 192.168.39.159
	I0729 11:15:14.571716   40779 certs.go:194] generating shared ca certs ...
	I0729 11:15:14.571739   40779 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:15:14.572028   40779 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:15:14.572076   40779 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:15:14.572095   40779 certs.go:256] generating profile certs ...
	I0729 11:15:14.572184   40779 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/client.key
	I0729 11:15:14.572249   40779 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key.f37b8ebe
	I0729 11:15:14.572285   40779 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key
	I0729 11:15:14.572295   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:15:14.572306   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:15:14.572318   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:15:14.572331   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:15:14.572343   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:15:14.572355   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:15:14.572367   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:15:14.572379   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:15:14.572439   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:15:14.572467   40779 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:15:14.572477   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:15:14.572533   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:15:14.572560   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:15:14.572586   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:15:14.572623   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:15:14.572652   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.572665   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.572679   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.573340   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:15:14.601525   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:15:14.627572   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:15:14.654432   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:15:14.680755   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:15:14.706732   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:15:14.733250   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:15:14.758963   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:15:14.784772   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:15:14.811288   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:15:14.836042   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:15:14.861228   40779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:15:14.880464   40779 ssh_runner.go:195] Run: openssl version
	I0729 11:15:14.886602   40779 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 11:15:14.886680   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:15:14.897952   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902459   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902508   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902556   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.908053   40779 command_runner.go:130] > 3ec20f2e
	I0729 11:15:14.908225   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:15:14.917531   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:15:14.928671   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933271   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933307   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933348   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.938728   40779 command_runner.go:130] > b5213941
	I0729 11:15:14.938860   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:15:14.948422   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:15:14.960192   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.964961   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.965073   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.965131   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.971088   40779 command_runner.go:130] > 51391683
	I0729 11:15:14.971256   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
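	The three command pairs above implement OpenSSL's subject-hash lookup scheme by hand: compute the hash of the certificate, then link it under /etc/ssl/certs as <hash>.0. The general pattern, using one of the files from this run, is:

	# Sketch: install a CA certificate where OpenSSL can find it by subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")       # e.g. b5213941, as in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"      # the .0 suffix covers the first (usually only) hash collision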
	I0729 11:15:14.981240   40779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:15:14.986226   40779 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:15:14.986249   40779 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 11:15:14.986255   40779 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 11:15:14.986261   40779 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 11:15:14.986268   40779 command_runner.go:130] > Access: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986275   40779 command_runner.go:130] > Modify: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986281   40779 command_runner.go:130] > Change: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986288   40779 command_runner.go:130] >  Birth: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986368   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:15:14.992266   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:14.992424   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:15:14.998036   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:14.998217   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:15:15.003883   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.004126   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:15:15.009785   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.009854   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:15:15.015445   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.015580   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:15:15.021498   40779 command_runner.go:130] > Certificate will not expire
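	Each of the checks above uses openssl's -checkend, which exits non-zero if the certificate expires within the given number of seconds; 86400 is a 24-hour window, and the "Certificate will not expire" lines are openssl's own output. A standalone sketch of the same check:

	# Sketch: fail loudly if a cluster certificate expires within the next 24 hours.
	if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate is valid for at least another 24h"
	else
	  echo "certificate expires within 24h; regenerate it" >&2
	fi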
	I0729 11:15:15.021575   40779 kubeadm.go:392] StartCluster: {Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:15:15.021711   40779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:15:15.021781   40779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:15:15.069049   40779 command_runner.go:130] > 431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a
	I0729 11:15:15.069085   40779 command_runner.go:130] > e2275ef3de0527c1700a65468ea19e03300aff678da1429f9f469630c64ca2b3
	I0729 11:15:15.069094   40779 command_runner.go:130] > df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31
	I0729 11:15:15.069112   40779 command_runner.go:130] > 29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b
	I0729 11:15:15.069122   40779 command_runner.go:130] > d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f
	I0729 11:15:15.069131   40779 command_runner.go:130] > 7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc
	I0729 11:15:15.069140   40779 command_runner.go:130] > eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb
	I0729 11:15:15.069152   40779 command_runner.go:130] > 8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309
	I0729 11:15:15.069162   40779 command_runner.go:130] > 15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f
	I0729 11:15:15.069189   40779 cri.go:89] found id: "431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a"
	I0729 11:15:15.069201   40779 cri.go:89] found id: "e2275ef3de0527c1700a65468ea19e03300aff678da1429f9f469630c64ca2b3"
	I0729 11:15:15.069207   40779 cri.go:89] found id: "df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31"
	I0729 11:15:15.069216   40779 cri.go:89] found id: "29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b"
	I0729 11:15:15.069224   40779 cri.go:89] found id: "d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f"
	I0729 11:15:15.069230   40779 cri.go:89] found id: "7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc"
	I0729 11:15:15.069239   40779 cri.go:89] found id: "eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb"
	I0729 11:15:15.069247   40779 cri.go:89] found id: "8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309"
	I0729 11:15:15.069252   40779 cri.go:89] found id: "15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f"
	I0729 11:15:15.069264   40779 cri.go:89] found id: ""
	I0729 11:15:15.069319   40779 ssh_runner.go:195] Run: sudo runc list -f json
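	The container IDs above come from filtering crictl by the kube-system namespace label; the same listing, plus a closer look at any single container, can be reproduced on the node (sketch; <container-id> is a placeholder for one of the IDs listed above):

	# Sketch: list kube-system containers in all states, then inspect one of them.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect <container-id> | head -n 40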
	
	
	==> CRI-O <==
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.657383158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251822657359132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d101885-664c-4f52-a18c-3477015b960e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.660119781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4cec55f-a456-4606-8972-2391afe49d0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.660177738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4cec55f-a456-4606-8972-2391afe49d0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.660537578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4cec55f-a456-4606-8972-2391afe49d0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.704533087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1eac7bc-9615-4274-8ebf-6f8898bef75b name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.704606174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1eac7bc-9615-4274-8ebf-6f8898bef75b name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.706039208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4248d23a-5818-49f5-bf4c-3570706c2dba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.706885271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251822706811858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4248d23a-5818-49f5-bf4c-3570706c2dba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.707621629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b15c8ada-6779-4ba0-a380-31d1db4c1d6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.707675207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b15c8ada-6779-4ba0-a380-31d1db4c1d6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.708104002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b15c8ada-6779-4ba0-a380-31d1db4c1d6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.760075680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ec277a2-d6a5-4366-a4d9-18d1e4bf5778 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.760188345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ec277a2-d6a5-4366-a4d9-18d1e4bf5778 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.761887296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fb8e0a6-7d31-42d5-8736-d3f075b6040c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.762837786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251822762810765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fb8e0a6-7d31-42d5-8736-d3f075b6040c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.763664939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69b5a998-13da-4a99-bcf0-8db20251973c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.763783955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69b5a998-13da-4a99-bcf0-8db20251973c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.764177323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69b5a998-13da-4a99-bcf0-8db20251973c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.808933998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c164e8a9-bdbc-44b4-9b37-6b8260687687 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.809008931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c164e8a9-bdbc-44b4-9b37-6b8260687687 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.810235032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=965731e6-367a-440f-8f3e-9a9a3e87da3a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.810721597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251822810696153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=965731e6-367a-440f-8f3e-9a9a3e87da3a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.811571978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e286d4ec-35f8-4dcb-bd15-41df4fa7490e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.811650448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e286d4ec-35f8-4dcb-bd15-41df4fa7490e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:17:02 multinode-893477 crio[3012]: time="2024-07-29 11:17:02.812115429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e286d4ec-35f8-4dcb-bd15-41df4fa7490e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b10e2ac7fb746       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e7943e0a40a14       busybox-fc5497c4f-mq79l
	470c02518ecee       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   c66fb57672e37       kindnet-52h82
	3042fde14486b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   ec088cad8f62e       coredns-7db6d8ff4d-4sc9b
	c9ee6daa9db05       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   efe7df772093a       storage-provisioner
	4199b2f23de6b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   f519bf5dbbf07       kube-proxy-hmnwn
	77f55feb77a82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   372138dd092b5       etcd-multinode-893477
	60cade466a5be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   e61c14c0e97ab       kube-apiserver-multinode-893477
	1adc266ca6fcf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   0be8accd1ddf2       kube-controller-manager-multinode-893477
	b645b366743a3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   c083d38f7905e       kube-scheduler-multinode-893477
	431fdf08bfd04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   1e316fe6f649d       coredns-7db6d8ff4d-4sc9b
	db46dd5cb4157       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   a6429c897a267       busybox-fc5497c4f-mq79l
	df7717af6e7e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   aa7afdbbe82af       storage-provisioner
	29597d58a1c6d       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   4ae79f01a6ff4       kindnet-52h82
	d0df24cda44f0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   4e49bfac1b2cc       kube-proxy-hmnwn
	7f788b5e98ba1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   30c1d8f8284ba       etcd-multinode-893477
	eeb0db57c1689       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   fc794e0326fb4       kube-controller-manager-multinode-893477
	8ae2b946a03fe       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   89e865ac40d06       kube-apiserver-multinode-893477
	15dc31d0fa5d3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   04173721ae531       kube-scheduler-multinode-893477
	
	
	==> coredns [3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51901 - 53317 "HINFO IN 6022383364451738498.608849142257830461. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018547434s
	
	
	==> coredns [431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:58028 - 53751 "HINFO IN 6936568639892992830.7635242098144383862. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015195349s
	
	
	==> describe nodes <==
	Name:               multinode-893477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-893477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=multinode-893477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_08_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:08:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-893477
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:17:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-893477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db0f71eb93f141508bb8d5922f75b2cf
	  System UUID:                db0f71eb-93f1-4150-8bb8-d5922f75b2cf
	  Boot ID:                    2f70c706-9750-4256-aa63-11f58a74942c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mq79l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 coredns-7db6d8ff4d-4sc9b                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m40s
	  kube-system                 etcd-multinode-893477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m53s
	  kube-system                 kindnet-52h82                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m40s
	  kube-system                 kube-apiserver-multinode-893477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 kube-controller-manager-multinode-893477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 kube-proxy-hmnwn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-multinode-893477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m37s                kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 8m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m53s                kubelet          Node multinode-893477 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m53s                kubelet          Node multinode-893477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m53s                kubelet          Node multinode-893477 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m41s                node-controller  Node multinode-893477 event: Registered Node multinode-893477 in Controller
	  Normal  NodeReady                8m23s                kubelet          Node multinode-893477 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 107s)  kubelet          Node multinode-893477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 107s)  kubelet          Node multinode-893477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 107s)  kubelet          Node multinode-893477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                  node-controller  Node multinode-893477 event: Registered Node multinode-893477 in Controller
	
	
	Name:               multinode-893477-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-893477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=multinode-893477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_16_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:16:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-893477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:16:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-893477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aed3968240d448e1afb6fa6c397e5660
	  System UUID:                aed39682-40d4-48e1-afb6-fa6c397e5660
	  Boot ID:                    b92c34db-d281-4862-b264-22c951ce0f87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tgnfq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-hcg5s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m47s
	  kube-system                 kube-proxy-ppbjw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m42s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m47s (x3 over 7m47s)  kubelet     Node multinode-893477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s (x3 over 7m47s)  kubelet     Node multinode-893477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s (x3 over 7m47s)  kubelet     Node multinode-893477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m27s                  kubelet     Node multinode-893477-m02 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-893477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-893477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-893477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-893477-m02 status is now: NodeReady
	
	
	Name:               multinode-893477-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-893477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=multinode-893477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_16_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:16:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-893477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:17:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:16:59 +0000   Mon, 29 Jul 2024 11:16:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:16:59 +0000   Mon, 29 Jul 2024 11:16:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:16:59 +0000   Mon, 29 Jul 2024 11:16:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:16:59 +0000   Mon, 29 Jul 2024 11:16:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-893477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85a8ec8f62154ebea78538eeb70ed3ad
	  System UUID:                85a8ec8f-6215-4ebe-a785-38eeb70ed3ad
	  Boot ID:                    ab917daf-a12b-4168-b373-69d3ec902b02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mfhng       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-proxy-pxmtg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  Starting                 6m42s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m49s (x2 over 6m49s)  kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s (x2 over 6m49s)  kubelet          Node multinode-893477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s (x2 over 6m49s)  kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m27s                  kubelet          Node multinode-893477-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-893477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m38s                  kubelet          Node multinode-893477-m03 status is now: NodeReady
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-893477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-893477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node multinode-893477-m03 event: Registered Node multinode-893477-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-893477-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060265] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.173136] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147204] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.298975] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.180101] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[Jul29 11:08] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.067055] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996647] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.086721] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.221283] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.450257] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +5.597061] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 11:09] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 11:15] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.157920] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.241157] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +0.166965] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +0.330517] systemd-fstab-generator[2992]: Ignoring "noauto" option for root device
	[ +10.437723] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.089624] kauditd_printk_skb: 110 callbacks suppressed
	[  +2.142360] systemd-fstab-generator[3243]: Ignoring "noauto" option for root device
	[  +4.673588] kauditd_printk_skb: 76 callbacks suppressed
	[ +12.782833] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.432316] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[ +18.588695] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9] <==
	{"level":"info","ts":"2024-07-29T11:15:18.206868Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T11:15:18.207046Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T11:15:18.20923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2024-07-29T11:15:18.211891Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2024-07-29T11:15:18.212049Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:15:18.212097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:15:18.218004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:15:18.218316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:15:18.218415Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:15:18.218565Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:15:18.218626Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:15:19.362166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.36783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:multinode-893477 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:15:19.36813Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:15:19.370424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:15:19.372212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:15:19.372456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:15:19.372498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:15:19.374138Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	
	
	==> etcd [7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc] <==
	{"level":"info","ts":"2024-07-29T11:08:06.012731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:08:06.012695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:08:06.014622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2024-07-29T11:08:06.015246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:08:06.015285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:08:06.016645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:09:15.679347Z","caller":"traceutil/trace.go:171","msg":"trace[471705015] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"194.085273ms","start":"2024-07-29T11:09:15.485219Z","end":"2024-07-29T11:09:15.679305Z","steps":["trace[471705015] 'process raft request'  (duration: 134.369645ms)","trace[471705015] 'compare'  (duration: 59.427068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:09:15.680149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.534948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T11:09:15.682593Z","caller":"traceutil/trace.go:171","msg":"trace[800302879] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"103.066004ms","start":"2024-07-29T11:09:15.579511Z","end":"2024-07-29T11:09:15.682577Z","steps":["trace[800302879] 'agreement among raft nodes before linearized reading'  (duration: 100.546029ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:09:16.542398Z","caller":"traceutil/trace.go:171","msg":"trace[1899739747] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"239.916032ms","start":"2024-07-29T11:09:16.302459Z","end":"2024-07-29T11:09:16.542375Z","steps":["trace[1899739747] 'read index received'  (duration: 201.573517ms)","trace[1899739747] 'applied index is now lower than readState.Index'  (duration: 38.335624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:09:16.542589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.111596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-l8gpm\" ","response":"range_response_count:1 size:1337"}
	{"level":"info","ts":"2024-07-29T11:09:16.542646Z","caller":"traceutil/trace.go:171","msg":"trace[1963257127] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-l8gpm; range_end:; response_count:1; response_revision:452; }","duration":"240.19757ms","start":"2024-07-29T11:09:16.302438Z","end":"2024-07-29T11:09:16.542635Z","steps":["trace[1963257127] 'agreement among raft nodes before linearized reading'  (duration: 240.024991ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:09:16.54281Z","caller":"traceutil/trace.go:171","msg":"trace[1893070044] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"240.749192ms","start":"2024-07-29T11:09:16.302047Z","end":"2024-07-29T11:09:16.542796Z","steps":["trace[1893070044] 'process raft request'  (duration: 202.060587ms)","trace[1893070044] 'compare'  (duration: 38.104159ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:10:14.915844Z","caller":"traceutil/trace.go:171","msg":"trace[1124866864] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"225.163358ms","start":"2024-07-29T11:10:14.690649Z","end":"2024-07-29T11:10:14.915813Z","steps":["trace[1124866864] 'process raft request'  (duration: 223.452659ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:10:14.916184Z","caller":"traceutil/trace.go:171","msg":"trace[2123130020] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"166.643621ms","start":"2024-07-29T11:10:14.749528Z","end":"2024-07-29T11:10:14.916172Z","steps":["trace[2123130020] 'process raft request'  (duration: 166.154863ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:13:31.697703Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T11:13:31.697904Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-893477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	{"level":"warn","ts":"2024-07-29T11:13:31.698037Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.698141Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.761237Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.159:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.761333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.159:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T11:13:31.761423Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f0ef8018a32f46af","current-leader-member-id":"f0ef8018a32f46af"}
	{"level":"info","ts":"2024-07-29T11:13:31.76397Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:13:31.764114Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:13:31.76414Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-893477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	
	
	==> kernel <==
	 11:17:03 up 9 min,  0 users,  load average: 0.61, 0.34, 0.16
	Linux multinode-893477 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b] <==
	I0729 11:12:50.023278       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:00.015231       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:00.015307       1 main.go:299] handling current node
	I0729 11:13:00.015331       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:00.015338       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:00.015502       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:00.015557       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:10.018516       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:10.018568       1 main.go:299] handling current node
	I0729 11:13:10.018600       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:10.018610       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:10.018868       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:10.018884       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:20.023340       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:20.023467       1 main.go:299] handling current node
	I0729 11:13:20.023506       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:20.023526       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:20.023865       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:20.023903       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:30.014999       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:30.015048       1 main.go:299] handling current node
	I0729 11:13:30.015064       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:30.015070       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:30.015204       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:30.015209       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987] <==
	I0729 11:16:22.727023       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:16:32.727010       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:16:32.727199       1 main.go:299] handling current node
	I0729 11:16:32.727287       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:16:32.727322       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:16:32.727562       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:16:32.727610       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:16:42.726301       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:16:42.726367       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.2.0/24] 
	I0729 11:16:42.726504       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:16:42.726529       1 main.go:299] handling current node
	I0729 11:16:42.726549       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:16:42.726554       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:16:52.726318       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:16:52.726430       1 main.go:299] handling current node
	I0729 11:16:52.726459       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:16:52.726478       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:16:52.726624       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:16:52.726647       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.2.0/24] 
	I0729 11:17:02.727872       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:17:02.727963       1 main.go:299] handling current node
	I0729 11:17:02.727990       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:17:02.727999       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:17:02.728141       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:17:02.728151       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad] <==
	I0729 11:15:20.780830       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 11:15:20.790290       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 11:15:20.790330       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 11:15:20.790420       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 11:15:20.791100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:15:20.796361       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:15:20.797246       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 11:15:20.798872       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 11:15:20.799257       1 aggregator.go:165] initial CRD sync complete...
	I0729 11:15:20.800093       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 11:15:20.800136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 11:15:20.800162       1 cache.go:39] Caches are synced for autoregister controller
	E0729 11:15:20.815318       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 11:15:20.822542       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 11:15:20.825406       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:15:20.825450       1 policy_source.go:224] refreshing policies
	I0729 11:15:20.857311       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:15:21.708892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:15:22.870518       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:15:23.015524       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:15:23.027933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:15:23.114516       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:15:23.121444       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:15:34.074026       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:15:34.102369       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309] <==
	E0729 11:13:31.713436       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0729 11:13:31.713946       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:13:31.714936       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 11:13:31.715030       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 11:13:31.715077       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:13:31.715114       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 11:13:31.715518       1 controller.go:157] Shutting down quota evaluator
	I0729 11:13:31.715565       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.720190       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 11:13:31.720862       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 11:13:31.726238       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726251       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726255       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726259       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.731326       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0729 11:13:31.732419       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 11:13:31.734290       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0729 11:13:31.736323       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736721       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736858       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736913       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737023       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737116       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737177       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737224       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6] <==
	I0729 11:15:34.547346       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:15:34.547439       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 11:15:34.581221       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:15:56.686882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.702622ms"
	I0729 11:15:56.701531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.110421ms"
	I0729 11:15:56.702150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.04µs"
	I0729 11:15:56.703357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.173µs"
	I0729 11:16:00.907707       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m02\" does not exist"
	I0729 11:16:00.920269       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m02" podCIDRs=["10.244.1.0/24"]
	I0729 11:16:02.833884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.617µs"
	I0729 11:16:02.852163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.394µs"
	I0729 11:16:02.891030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.311µs"
	I0729 11:16:02.900278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.058µs"
	I0729 11:16:02.902950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.733µs"
	I0729 11:16:03.772079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.046µs"
	I0729 11:16:20.686130       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:20.705400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.969µs"
	I0729 11:16:20.720491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.663µs"
	I0729 11:16:24.281493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.9882ms"
	I0729 11:16:24.282562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.862µs"
	I0729 11:16:39.024136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:40.260849       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:40.260984       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:16:40.273415       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.2.0/24"]
	I0729 11:16:59.836331       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	
	
	==> kube-controller-manager [eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb] <==
	I0729 11:08:42.721952       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 11:09:16.584853       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m02\" does not exist"
	I0729 11:09:16.598111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m02" podCIDRs=["10.244.1.0/24"]
	I0729 11:09:17.727185       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-893477-m02"
	I0729 11:09:36.191477       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:09:38.586774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.209794ms"
	I0729 11:09:38.600177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.567859ms"
	I0729 11:09:38.600429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.15µs"
	I0729 11:09:42.359207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.267346ms"
	I0729 11:09:42.359489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.018µs"
	I0729 11:09:42.454828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.988108ms"
	I0729 11:09:42.455472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.739µs"
	I0729 11:10:14.918950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:10:14.919024       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:10:14.932790       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.2.0/24"]
	I0729 11:10:17.749150       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-893477-m03"
	I0729 11:10:36.142178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:04.739826       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:06.187973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:06.188467       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:11:06.196113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.3.0/24"]
	I0729 11:11:25.950317       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:12:07.805305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:12:12.902146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.882904ms"
	I0729 11:12:12.902415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.469µs"
	
	
	==> kube-proxy [4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a] <==
	I0729 11:15:21.774257       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:15:21.793715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0729 11:15:21.883827       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:15:21.883926       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:15:21.883958       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:15:21.886568       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:15:21.886913       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:15:21.887245       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:15:21.889364       1 config.go:192] "Starting service config controller"
	I0729 11:15:21.889621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:15:21.889982       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:15:21.890017       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:15:21.890514       1 config.go:319] "Starting node config controller"
	I0729 11:15:21.890556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:15:21.991845       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:15:21.991939       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:15:21.992015       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f] <==
	I0729 11:08:25.732169       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:08:25.747066       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0729 11:08:25.781907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:08:25.781975       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:08:25.781992       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:08:25.784947       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:08:25.785174       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:08:25.785205       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:08:25.786784       1 config.go:192] "Starting service config controller"
	I0729 11:08:25.787008       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:08:25.787055       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:08:25.787075       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:08:25.787653       1 config.go:319] "Starting node config controller"
	I0729 11:08:25.787679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:08:25.888133       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:08:25.888234       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:08:25.888254       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f] <==
	E0729 11:08:07.440381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:07.440393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:08:07.440400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:08:08.291389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:08:08.291437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:08:08.324120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:08:08.324171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:08:08.363548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.364146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.507004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:08:08.507050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:08:08.536309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:08:08.536365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:08:08.590453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:08:08.590578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:08:08.627114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.627166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.660309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.660824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.676113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:08:08.676158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:08:08.775168       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:08:08.775217       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:08:11.325202       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 11:13:31.708634       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64] <==
	I0729 11:15:18.721055       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:15:20.721149       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:15:20.721241       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:15:20.721251       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:15:20.721258       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:15:20.779179       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:15:20.779223       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:15:20.783012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:15:20.783144       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:15:20.783174       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:15:20.783189       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:15:20.883518       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:15:18 multinode-893477 kubelet[3250]: I0729 11:15:18.454145    3250 kubelet_node_status.go:73] "Attempting to register node" node="multinode-893477"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.911263    3250 apiserver.go:52] "Watching apiserver"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.916146    3250 topology_manager.go:215] "Topology Admit Handler" podUID="4b82e9c0-f851-46b5-880b-60e698c16330" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4sc9b"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.916534    3250 topology_manager.go:215] "Topology Admit Handler" podUID="71930213-e103-46cc-8f0e-6f6574c5dd81" podNamespace="kube-system" podName="kindnet-52h82"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.916938    3250 topology_manager.go:215] "Topology Admit Handler" podUID="5442195f-18ec-4f12-b044-8959420929e0" podNamespace="kube-system" podName="kube-proxy-hmnwn"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.917190    3250 topology_manager.go:215] "Topology Admit Handler" podUID="d8307277-6866-4782-af18-0b3af40c2684" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.917803    3250 topology_manager.go:215] "Topology Admit Handler" podUID="35d18e03-abcd-4771-921d-4f3e02d2e156" podNamespace="default" podName="busybox-fc5497c4f-mq79l"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.930692    3250 kubelet_node_status.go:112] "Node was previously registered" node="multinode-893477"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.930820    3250 kubelet_node_status.go:76] "Successfully registered node" node="multinode-893477"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932269    3250 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932653    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71930213-e103-46cc-8f0e-6f6574c5dd81-cni-cfg\") pod \"kindnet-52h82\" (UID: \"71930213-e103-46cc-8f0e-6f6574c5dd81\") " pod="kube-system/kindnet-52h82"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932696    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5442195f-18ec-4f12-b044-8959420929e0-xtables-lock\") pod \"kube-proxy-hmnwn\" (UID: \"5442195f-18ec-4f12-b044-8959420929e0\") " pod="kube-system/kube-proxy-hmnwn"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932717    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71930213-e103-46cc-8f0e-6f6574c5dd81-xtables-lock\") pod \"kindnet-52h82\" (UID: \"71930213-e103-46cc-8f0e-6f6574c5dd81\") " pod="kube-system/kindnet-52h82"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932731    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71930213-e103-46cc-8f0e-6f6574c5dd81-lib-modules\") pod \"kindnet-52h82\" (UID: \"71930213-e103-46cc-8f0e-6f6574c5dd81\") " pod="kube-system/kindnet-52h82"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932795    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5442195f-18ec-4f12-b044-8959420929e0-lib-modules\") pod \"kube-proxy-hmnwn\" (UID: \"5442195f-18ec-4f12-b044-8959420929e0\") " pod="kube-system/kube-proxy-hmnwn"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932915    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d8307277-6866-4782-af18-0b3af40c2684-tmp\") pod \"storage-provisioner\" (UID: \"d8307277-6866-4782-af18-0b3af40c2684\") " pod="kube-system/storage-provisioner"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.934306    3250 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:15:21 multinode-893477 kubelet[3250]: I0729 11:15:21.019089    3250 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 11:15:23 multinode-893477 kubelet[3250]: I0729 11:15:23.109569    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 11:15:28 multinode-893477 kubelet[3250]: I0729 11:15:28.042969    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 11:16:17 multinode-893477 kubelet[3250]: E0729 11:16:17.005554    3250 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:17:02.364505   41971 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19337-3845/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-893477 -n multinode-893477
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-893477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (335.52s)
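Editor's note on the stderr above: the "bufio.Scanner: token too long" failure while dumping lastStart.txt comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB (bufio.MaxScanTokenSize); single log lines such as the multi-kilobyte cluster-config dumps in this report exceed that limit. The following is a minimal sketch, not minikube's implementation, of reading such a file with an enlarged scanner buffer; the local file name is only illustrative.

// Minimal sketch (not minikube code): read a log file whose lines can exceed
// bufio.Scanner's 64 KiB default token size, which is what produces
// "bufio.Scanner: token too long" for lastStart.txt above.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical local copy of the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Grow the scanner's buffer: start at 64 KiB, allow lines up to 10 MiB.
	sc.Buffer(make([]byte, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, this is where "token too long" surfaces.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}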

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 stop
E0729 11:18:03.511789   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-893477 stop: exit status 82 (2m0.464643179s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-893477-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-893477 stop": exit status 82
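Editor's note: exit status 82 corresponds to the GUEST_STOP_TIMEOUT failure shown in the stderr box above (the m02 VM never left the "Running" state within the stop timeout). Below is a minimal sketch of how a caller can surface that exit code with os/exec; it is illustrative only, not the test harness itself, with the binary path and profile name taken verbatim from the log.

// Sketch (not the test harness): run the stop command and report its exit
// status; in this run the status would be 82, which minikube's stderr above
// attributes to GUEST_STOP_TIMEOUT.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-893477", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("stop exited with status %d\n", exitErr.ExitCode())
	}
}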
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-893477 status: exit status 3 (18.79929728s)

                                                
                                                
-- stdout --
	multinode-893477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-893477-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:19:26.003034   43081 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0729 11:19:26.003069   43081 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-893477 status" : exit status 3
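Editor's note: the status failure is a consequence of the stop timeout above; with the m02 guest in an undefined state, minikube cannot open an SSH session (dial tcp 192.168.39.38:22: connect: no route to host), so the worker is reported as host: Error / kubelet: Nonexistent. As a hedged diagnostic sketch (not minikube code), one can probe the node's SSH port directly to confirm that plain TCP connectivity, rather than authentication, is what fails:

// Diagnostic sketch: probe the worker node's SSH endpoint the way the status
// check effectively does. The address is the m02 IP reported in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.38:22" // multinode-893477-m02 from the report
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("node unreachable: %v\n", err) // e.g. "connect: no route to host"
		return
	}
	conn.Close()
	fmt.Println("SSH port reachable; the failure lies past TCP connectivity")
}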
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-893477 -n multinode-893477
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-893477 logs -n 25: (1.515004193s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477:/home/docker/cp-test_multinode-893477-m02_multinode-893477.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477 sudo cat                                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m02_multinode-893477.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03:/home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477-m03 sudo cat                                   | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp testdata/cp-test.txt                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477:/home/docker/cp-test_multinode-893477-m03_multinode-893477.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477 sudo cat                                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02:/home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477-m02 sudo cat                                   | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-893477 node stop m03                                                          | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	| node    | multinode-893477 node start                                                             | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| stop    | -p multinode-893477                                                                     | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| start   | -p multinode-893477                                                                     | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC |                     |
	| node    | multinode-893477 node delete                                                            | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC | 29 Jul 24 11:17 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-893477 stop                                                                   | multinode-893477 | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:13:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:13:30.736169   40779 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:13:30.736282   40779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:30.736289   40779 out.go:304] Setting ErrFile to fd 2...
	I0729 11:13:30.736293   40779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:30.736484   40779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:13:30.737014   40779 out.go:298] Setting JSON to false
	I0729 11:13:30.737828   40779 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3357,"bootTime":1722248254,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:13:30.737891   40779 start.go:139] virtualization: kvm guest
	I0729 11:13:30.740874   40779 out.go:177] * [multinode-893477] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:13:30.742280   40779 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:13:30.742282   40779 notify.go:220] Checking for updates...
	I0729 11:13:30.745487   40779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:13:30.746878   40779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:13:30.747910   40779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:13:30.749206   40779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:13:30.750978   40779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:13:30.752932   40779 config.go:182] Loaded profile config "multinode-893477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:13:30.753022   40779 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:13:30.753461   40779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:13:30.753528   40779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:13:30.768630   40779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0729 11:13:30.769085   40779 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:13:30.769704   40779 main.go:141] libmachine: Using API Version  1
	I0729 11:13:30.769730   40779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:13:30.770012   40779 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:13:30.770174   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.805799   40779 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:13:30.806929   40779 start.go:297] selected driver: kvm2
	I0729 11:13:30.806944   40779 start.go:901] validating driver "kvm2" against &{Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:30.807110   40779 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:13:30.807415   40779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:13:30.807482   40779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:13:30.822192   40779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:13:30.822955   40779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:13:30.822994   40779 cni.go:84] Creating CNI manager for ""
	I0729 11:13:30.823003   40779 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 11:13:30.823063   40779 start.go:340] cluster config:
	{Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:30.823197   40779 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:13:30.824936   40779 out.go:177] * Starting "multinode-893477" primary control-plane node in "multinode-893477" cluster
	I0729 11:13:30.826241   40779 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:13:30.826270   40779 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:13:30.826278   40779 cache.go:56] Caching tarball of preloaded images
	I0729 11:13:30.826343   40779 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:13:30.826352   40779 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:13:30.826505   40779 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/config.json ...
	I0729 11:13:30.826716   40779 start.go:360] acquireMachinesLock for multinode-893477: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:13:30.826766   40779 start.go:364] duration metric: took 26.968µs to acquireMachinesLock for "multinode-893477"
	I0729 11:13:30.826790   40779 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:13:30.826799   40779 fix.go:54] fixHost starting: 
	I0729 11:13:30.827050   40779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:13:30.827089   40779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:13:30.842096   40779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45087
	I0729 11:13:30.842505   40779 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:13:30.842975   40779 main.go:141] libmachine: Using API Version  1
	I0729 11:13:30.842989   40779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:13:30.843228   40779 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:13:30.843402   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.843566   40779 main.go:141] libmachine: (multinode-893477) Calling .GetState
	I0729 11:13:30.844967   40779 fix.go:112] recreateIfNeeded on multinode-893477: state=Running err=<nil>
	W0729 11:13:30.844987   40779 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:13:30.847047   40779 out.go:177] * Updating the running kvm2 "multinode-893477" VM ...
	I0729 11:13:30.848447   40779 machine.go:94] provisionDockerMachine start ...
	I0729 11:13:30.848468   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:13:30.848697   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:30.850873   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.851323   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:30.851352   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.851519   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:30.851658   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.851799   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.851951   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:30.852108   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:30.852298   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:30.852309   40779 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:13:30.964097   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-893477
	
	I0729 11:13:30.964123   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:30.964335   40779 buildroot.go:166] provisioning hostname "multinode-893477"
	I0729 11:13:30.964356   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:30.964511   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:30.967153   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.967528   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:30.967564   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:30.967684   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:30.967902   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.968038   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:30.968205   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:30.968366   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:30.968532   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:30.968543   40779 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-893477 && echo "multinode-893477" | sudo tee /etc/hostname
	I0729 11:13:31.090555   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-893477
	
	I0729 11:13:31.090577   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.093260   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.093550   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.093592   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.093771   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.093937   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.094115   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.094247   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.094421   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:31.094590   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:31.094607   40779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-893477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-893477/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-893477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:13:31.203820   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:13:31.203863   40779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:13:31.203910   40779 buildroot.go:174] setting up certificates
	I0729 11:13:31.203924   40779 provision.go:84] configureAuth start
	I0729 11:13:31.203940   40779 main.go:141] libmachine: (multinode-893477) Calling .GetMachineName
	I0729 11:13:31.204263   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:13:31.206844   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.207289   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.207319   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.207434   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.209764   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.210066   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.210100   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.210238   40779 provision.go:143] copyHostCerts
	I0729 11:13:31.210266   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:13:31.210303   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:13:31.210310   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:13:31.210375   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:13:31.210455   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:13:31.210471   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:13:31.210477   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:13:31.210501   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:13:31.210553   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:13:31.210569   40779 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:13:31.210574   40779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:13:31.210594   40779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:13:31.210649   40779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.multinode-893477 san=[127.0.0.1 192.168.39.159 localhost minikube multinode-893477]
	I0729 11:13:31.405495   40779 provision.go:177] copyRemoteCerts
	I0729 11:13:31.405549   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:13:31.405571   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.408459   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.408814   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.408845   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.409043   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.409224   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.409365   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.409489   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:13:31.493298   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:13:31.493356   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:13:31.519881   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:13:31.519939   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 11:13:31.544399   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:13:31.544463   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:13:31.569440   40779 provision.go:87] duration metric: took 365.502683ms to configureAuth
	I0729 11:13:31.569470   40779 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:13:31.569738   40779 config.go:182] Loaded profile config "multinode-893477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:13:31.569816   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:13:31.572563   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.573010   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:13:31.573032   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:13:31.573233   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:13:31.573414   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.573579   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:13:31.573744   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:13:31.573954   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:13:31.574151   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:13:31.574166   40779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:15:02.417197   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:15:02.417221   40779 machine.go:97] duration metric: took 1m31.568761424s to provisionDockerMachine
	I0729 11:15:02.417234   40779 start.go:293] postStartSetup for "multinode-893477" (driver="kvm2")
	I0729 11:15:02.417247   40779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:15:02.417295   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.417680   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:15:02.417722   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.420770   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.421200   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.421229   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.421369   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.421544   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.421745   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.422025   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.510830   40779 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:15:02.515143   40779 command_runner.go:130] > NAME=Buildroot
	I0729 11:15:02.515160   40779 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 11:15:02.515165   40779 command_runner.go:130] > ID=buildroot
	I0729 11:15:02.515170   40779 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 11:15:02.515175   40779 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 11:15:02.515232   40779 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:15:02.515247   40779 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:15:02.515306   40779 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:15:02.515373   40779 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:15:02.515384   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /etc/ssl/certs/110642.pem
	I0729 11:15:02.515497   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:15:02.525877   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:15:02.550244   40779 start.go:296] duration metric: took 132.997653ms for postStartSetup
	I0729 11:15:02.550313   40779 fix.go:56] duration metric: took 1m31.723512391s for fixHost
	I0729 11:15:02.550343   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.553362   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.553804   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.553839   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.553958   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.554196   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.554352   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.554489   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.554810   40779 main.go:141] libmachine: Using SSH client type: native
	I0729 11:15:02.554992   40779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0729 11:15:02.555004   40779 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:15:02.663819   40779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722251702.641533815
	
	I0729 11:15:02.663844   40779 fix.go:216] guest clock: 1722251702.641533815
	I0729 11:15:02.663854   40779 fix.go:229] Guest: 2024-07-29 11:15:02.641533815 +0000 UTC Remote: 2024-07-29 11:15:02.550319148 +0000 UTC m=+91.848445825 (delta=91.214667ms)
	I0729 11:15:02.663924   40779 fix.go:200] guest clock delta is within tolerance: 91.214667ms
	I0729 11:15:02.663933   40779 start.go:83] releasing machines lock for "multinode-893477", held for 1m31.837153851s
	I0729 11:15:02.663973   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.664292   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:15:02.667004   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.667376   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.667406   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.667604   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668130   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668306   40779 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:15:02.668416   40779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:15:02.668453   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.668558   40779 ssh_runner.go:195] Run: cat /version.json
	I0729 11:15:02.668577   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:15:02.671120   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671152   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671553   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.671575   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671611   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:02.671630   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:02.671686   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.671805   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:15:02.671878   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.671988   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:15:02.672093   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.672155   40779 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:15:02.672221   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.672284   40779 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:15:02.772774   40779 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 11:15:02.773554   40779 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 11:15:02.773740   40779 ssh_runner.go:195] Run: systemctl --version
	I0729 11:15:02.779982   40779 command_runner.go:130] > systemd 252 (252)
	I0729 11:15:02.780025   40779 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 11:15:02.780248   40779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:15:02.957139   40779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 11:15:02.966206   40779 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 11:15:02.966257   40779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:15:02.966312   40779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:15:02.978274   40779 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 11:15:02.978303   40779 start.go:495] detecting cgroup driver to use...
	I0729 11:15:02.978382   40779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:15:03.002244   40779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:15:03.018694   40779 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:15:03.018769   40779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:15:03.037073   40779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:15:03.054059   40779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:15:03.222532   40779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:15:03.388931   40779 docker.go:233] disabling docker service ...
	I0729 11:15:03.388992   40779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:15:03.424097   40779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:15:03.444689   40779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:15:03.626482   40779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:15:03.807927   40779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:15:03.822958   40779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:15:03.844914   40779 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 11:15:03.845056   40779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:15:03.845126   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.857188   40779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:15:03.857259   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.869232   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.880979   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.892730   40779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:15:03.904426   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.915527   40779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.927146   40779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:15:03.938527   40779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:15:03.952133   40779 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 11:15:03.952390   40779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:15:03.962895   40779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:15:04.114680   40779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:15:14.030281   40779 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.915537854s)
	I0729 11:15:14.030312   40779 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:15:14.030353   40779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:15:14.035865   40779 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 11:15:14.035887   40779 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 11:15:14.035895   40779 command_runner.go:130] > Device: 0,22	Inode: 1418        Links: 1
	I0729 11:15:14.035901   40779 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 11:15:14.035906   40779 command_runner.go:130] > Access: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035912   40779 command_runner.go:130] > Modify: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035917   40779 command_runner.go:130] > Change: 2024-07-29 11:15:13.851100415 +0000
	I0729 11:15:14.035921   40779 command_runner.go:130] >  Birth: -
	I0729 11:15:14.036136   40779 start.go:563] Will wait 60s for crictl version
	I0729 11:15:14.036210   40779 ssh_runner.go:195] Run: which crictl
	I0729 11:15:14.041124   40779 command_runner.go:130] > /usr/bin/crictl
	I0729 11:15:14.041341   40779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:15:14.090159   40779 command_runner.go:130] > Version:  0.1.0
	I0729 11:15:14.090183   40779 command_runner.go:130] > RuntimeName:  cri-o
	I0729 11:15:14.090220   40779 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 11:15:14.090258   40779 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 11:15:14.091815   40779 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:15:14.091881   40779 ssh_runner.go:195] Run: crio --version
	I0729 11:15:14.129483   40779 command_runner.go:130] > crio version 1.29.1
	I0729 11:15:14.129509   40779 command_runner.go:130] > Version:        1.29.1
	I0729 11:15:14.129517   40779 command_runner.go:130] > GitCommit:      unknown
	I0729 11:15:14.129522   40779 command_runner.go:130] > GitCommitDate:  unknown
	I0729 11:15:14.129526   40779 command_runner.go:130] > GitTreeState:   clean
	I0729 11:15:14.129534   40779 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 11:15:14.129538   40779 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 11:15:14.129542   40779 command_runner.go:130] > Compiler:       gc
	I0729 11:15:14.129547   40779 command_runner.go:130] > Platform:       linux/amd64
	I0729 11:15:14.129550   40779 command_runner.go:130] > Linkmode:       dynamic
	I0729 11:15:14.129556   40779 command_runner.go:130] > BuildTags:      
	I0729 11:15:14.129560   40779 command_runner.go:130] >   containers_image_ostree_stub
	I0729 11:15:14.129564   40779 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 11:15:14.129567   40779 command_runner.go:130] >   btrfs_noversion
	I0729 11:15:14.129571   40779 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 11:15:14.129574   40779 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 11:15:14.129578   40779 command_runner.go:130] >   seccomp
	I0729 11:15:14.129584   40779 command_runner.go:130] > LDFlags:          unknown
	I0729 11:15:14.129589   40779 command_runner.go:130] > SeccompEnabled:   true
	I0729 11:15:14.129599   40779 command_runner.go:130] > AppArmorEnabled:  false
	I0729 11:15:14.129712   40779 ssh_runner.go:195] Run: crio --version
	I0729 11:15:14.164380   40779 command_runner.go:130] > crio version 1.29.1
	I0729 11:15:14.164407   40779 command_runner.go:130] > Version:        1.29.1
	I0729 11:15:14.164416   40779 command_runner.go:130] > GitCommit:      unknown
	I0729 11:15:14.164423   40779 command_runner.go:130] > GitCommitDate:  unknown
	I0729 11:15:14.164429   40779 command_runner.go:130] > GitTreeState:   clean
	I0729 11:15:14.164441   40779 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 11:15:14.164447   40779 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 11:15:14.164453   40779 command_runner.go:130] > Compiler:       gc
	I0729 11:15:14.164460   40779 command_runner.go:130] > Platform:       linux/amd64
	I0729 11:15:14.164468   40779 command_runner.go:130] > Linkmode:       dynamic
	I0729 11:15:14.164476   40779 command_runner.go:130] > BuildTags:      
	I0729 11:15:14.164483   40779 command_runner.go:130] >   containers_image_ostree_stub
	I0729 11:15:14.164518   40779 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 11:15:14.164532   40779 command_runner.go:130] >   btrfs_noversion
	I0729 11:15:14.164541   40779 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 11:15:14.164549   40779 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 11:15:14.164556   40779 command_runner.go:130] >   seccomp
	I0729 11:15:14.164563   40779 command_runner.go:130] > LDFlags:          unknown
	I0729 11:15:14.164572   40779 command_runner.go:130] > SeccompEnabled:   true
	I0729 11:15:14.164580   40779 command_runner.go:130] > AppArmorEnabled:  false
	I0729 11:15:14.166476   40779 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:15:14.168014   40779 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:15:14.170837   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:14.171363   40779 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:15:14.171390   40779 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:15:14.171657   40779 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:15:14.176455   40779 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 11:15:14.176572   40779 kubeadm.go:883] updating cluster {Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:15:14.176706   40779 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:15:14.176743   40779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:15:14.225483   40779 command_runner.go:130] > {
	I0729 11:15:14.225509   40779 command_runner.go:130] >   "images": [
	I0729 11:15:14.225513   40779 command_runner.go:130] >     {
	I0729 11:15:14.225521   40779 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 11:15:14.225526   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225531   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 11:15:14.225535   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225539   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225547   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 11:15:14.225554   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 11:15:14.225564   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225570   40779 command_runner.go:130] >       "size": "87165492",
	I0729 11:15:14.225574   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225578   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225583   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225591   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225594   40779 command_runner.go:130] >     },
	I0729 11:15:14.225597   40779 command_runner.go:130] >     {
	I0729 11:15:14.225603   40779 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 11:15:14.225607   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225614   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 11:15:14.225618   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225622   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225628   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 11:15:14.225636   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 11:15:14.225639   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225646   40779 command_runner.go:130] >       "size": "87174707",
	I0729 11:15:14.225649   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225657   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225663   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225667   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225670   40779 command_runner.go:130] >     },
	I0729 11:15:14.225673   40779 command_runner.go:130] >     {
	I0729 11:15:14.225679   40779 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 11:15:14.225685   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225690   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 11:15:14.225694   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225698   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225704   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 11:15:14.225713   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 11:15:14.225717   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225724   40779 command_runner.go:130] >       "size": "1363676",
	I0729 11:15:14.225727   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225734   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225737   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225741   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225745   40779 command_runner.go:130] >     },
	I0729 11:15:14.225750   40779 command_runner.go:130] >     {
	I0729 11:15:14.225756   40779 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 11:15:14.225762   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225767   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 11:15:14.225770   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225779   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225789   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 11:15:14.225801   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 11:15:14.225805   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225810   40779 command_runner.go:130] >       "size": "31470524",
	I0729 11:15:14.225814   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225818   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225822   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225826   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225829   40779 command_runner.go:130] >     },
	I0729 11:15:14.225833   40779 command_runner.go:130] >     {
	I0729 11:15:14.225838   40779 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 11:15:14.225843   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225848   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 11:15:14.225854   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225858   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225865   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 11:15:14.225875   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 11:15:14.225880   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225884   40779 command_runner.go:130] >       "size": "61245718",
	I0729 11:15:14.225891   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.225894   40779 command_runner.go:130] >       "username": "nonroot",
	I0729 11:15:14.225898   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225901   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225905   40779 command_runner.go:130] >     },
	I0729 11:15:14.225908   40779 command_runner.go:130] >     {
	I0729 11:15:14.225914   40779 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 11:15:14.225920   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.225924   40779 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 11:15:14.225930   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225934   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.225941   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 11:15:14.225949   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 11:15:14.225955   40779 command_runner.go:130] >       ],
	I0729 11:15:14.225960   40779 command_runner.go:130] >       "size": "150779692",
	I0729 11:15:14.225965   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.225970   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.225975   40779 command_runner.go:130] >       },
	I0729 11:15:14.225979   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.225987   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.225993   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.225996   40779 command_runner.go:130] >     },
	I0729 11:15:14.226002   40779 command_runner.go:130] >     {
	I0729 11:15:14.226008   40779 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 11:15:14.226014   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226019   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 11:15:14.226024   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226028   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226035   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 11:15:14.226044   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 11:15:14.226047   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226057   40779 command_runner.go:130] >       "size": "117609954",
	I0729 11:15:14.226063   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226067   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226073   40779 command_runner.go:130] >       },
	I0729 11:15:14.226077   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226083   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226087   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226092   40779 command_runner.go:130] >     },
	I0729 11:15:14.226096   40779 command_runner.go:130] >     {
	I0729 11:15:14.226103   40779 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 11:15:14.226108   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226116   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 11:15:14.226122   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226126   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226142   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 11:15:14.226153   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 11:15:14.226157   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226161   40779 command_runner.go:130] >       "size": "112198984",
	I0729 11:15:14.226164   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226168   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226171   40779 command_runner.go:130] >       },
	I0729 11:15:14.226175   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226178   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226182   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226185   40779 command_runner.go:130] >     },
	I0729 11:15:14.226188   40779 command_runner.go:130] >     {
	I0729 11:15:14.226194   40779 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 11:15:14.226197   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226202   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 11:15:14.226205   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226209   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226220   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 11:15:14.226238   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 11:15:14.226243   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226247   40779 command_runner.go:130] >       "size": "85953945",
	I0729 11:15:14.226251   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.226255   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226259   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226262   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226269   40779 command_runner.go:130] >     },
	I0729 11:15:14.226272   40779 command_runner.go:130] >     {
	I0729 11:15:14.226278   40779 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 11:15:14.226284   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226289   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 11:15:14.226294   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226299   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226308   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 11:15:14.226317   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 11:15:14.226322   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226327   40779 command_runner.go:130] >       "size": "63051080",
	I0729 11:15:14.226332   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226337   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.226343   40779 command_runner.go:130] >       },
	I0729 11:15:14.226346   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226352   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226356   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.226362   40779 command_runner.go:130] >     },
	I0729 11:15:14.226366   40779 command_runner.go:130] >     {
	I0729 11:15:14.226372   40779 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 11:15:14.226376   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.226382   40779 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 11:15:14.226386   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226392   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.226398   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 11:15:14.226407   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 11:15:14.226412   40779 command_runner.go:130] >       ],
	I0729 11:15:14.226416   40779 command_runner.go:130] >       "size": "750414",
	I0729 11:15:14.226422   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.226426   40779 command_runner.go:130] >         "value": "65535"
	I0729 11:15:14.226431   40779 command_runner.go:130] >       },
	I0729 11:15:14.226435   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.226441   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.226445   40779 command_runner.go:130] >       "pinned": true
	I0729 11:15:14.226451   40779 command_runner.go:130] >     }
	I0729 11:15:14.226454   40779 command_runner.go:130] >   ]
	I0729 11:15:14.226459   40779 command_runner.go:130] > }
	I0729 11:15:14.226613   40779 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:15:14.226624   40779 crio.go:433] Images already preloaded, skipping extraction
	I0729 11:15:14.226679   40779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:15:14.262684   40779 command_runner.go:130] > {
	I0729 11:15:14.262715   40779 command_runner.go:130] >   "images": [
	I0729 11:15:14.262720   40779 command_runner.go:130] >     {
	I0729 11:15:14.262727   40779 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 11:15:14.262732   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262737   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 11:15:14.262741   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262746   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262759   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 11:15:14.262766   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 11:15:14.262772   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262776   40779 command_runner.go:130] >       "size": "87165492",
	I0729 11:15:14.262783   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.262787   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.262795   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.262799   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.262802   40779 command_runner.go:130] >     },
	I0729 11:15:14.262807   40779 command_runner.go:130] >     {
	I0729 11:15:14.262815   40779 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 11:15:14.262821   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262828   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 11:15:14.262836   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262841   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262853   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 11:15:14.262867   40779 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 11:15:14.262874   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262880   40779 command_runner.go:130] >       "size": "87174707",
	I0729 11:15:14.262886   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.262895   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.262901   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.262908   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.262914   40779 command_runner.go:130] >     },
	I0729 11:15:14.262920   40779 command_runner.go:130] >     {
	I0729 11:15:14.262925   40779 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 11:15:14.262929   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.262940   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 11:15:14.262945   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262954   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.262966   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 11:15:14.262980   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 11:15:14.262985   40779 command_runner.go:130] >       ],
	I0729 11:15:14.262993   40779 command_runner.go:130] >       "size": "1363676",
	I0729 11:15:14.262997   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263014   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263027   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263031   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263035   40779 command_runner.go:130] >     },
	I0729 11:15:14.263039   40779 command_runner.go:130] >     {
	I0729 11:15:14.263046   40779 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 11:15:14.263050   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263059   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 11:15:14.263065   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263071   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263084   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 11:15:14.263103   40779 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 11:15:14.263110   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263116   40779 command_runner.go:130] >       "size": "31470524",
	I0729 11:15:14.263123   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263129   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263133   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263138   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263141   40779 command_runner.go:130] >     },
	I0729 11:15:14.263144   40779 command_runner.go:130] >     {
	I0729 11:15:14.263151   40779 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 11:15:14.263161   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263170   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 11:15:14.263179   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263188   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263202   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 11:15:14.263216   40779 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 11:15:14.263225   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263231   40779 command_runner.go:130] >       "size": "61245718",
	I0729 11:15:14.263235   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263244   40779 command_runner.go:130] >       "username": "nonroot",
	I0729 11:15:14.263254   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263261   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263269   40779 command_runner.go:130] >     },
	I0729 11:15:14.263276   40779 command_runner.go:130] >     {
	I0729 11:15:14.263289   40779 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 11:15:14.263299   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263309   40779 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 11:15:14.263315   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263319   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263332   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 11:15:14.263347   40779 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 11:15:14.263356   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263367   40779 command_runner.go:130] >       "size": "150779692",
	I0729 11:15:14.263375   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263385   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263396   40779 command_runner.go:130] >       },
	I0729 11:15:14.263405   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263411   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263417   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263425   40779 command_runner.go:130] >     },
	I0729 11:15:14.263430   40779 command_runner.go:130] >     {
	I0729 11:15:14.263443   40779 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 11:15:14.263451   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263462   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 11:15:14.263471   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263480   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263492   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 11:15:14.263512   40779 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 11:15:14.263521   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263531   40779 command_runner.go:130] >       "size": "117609954",
	I0729 11:15:14.263540   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263549   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263557   40779 command_runner.go:130] >       },
	I0729 11:15:14.263566   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263575   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263578   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263582   40779 command_runner.go:130] >     },
	I0729 11:15:14.263588   40779 command_runner.go:130] >     {
	I0729 11:15:14.263598   40779 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 11:15:14.263608   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263619   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 11:15:14.263629   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263638   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263663   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 11:15:14.263677   40779 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 11:15:14.263682   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263689   40779 command_runner.go:130] >       "size": "112198984",
	I0729 11:15:14.263699   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263706   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.263715   40779 command_runner.go:130] >       },
	I0729 11:15:14.263721   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263730   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263739   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263746   40779 command_runner.go:130] >     },
	I0729 11:15:14.263751   40779 command_runner.go:130] >     {
	I0729 11:15:14.263759   40779 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 11:15:14.263767   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263778   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 11:15:14.263786   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263792   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263805   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 11:15:14.263822   40779 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 11:15:14.263830   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263836   40779 command_runner.go:130] >       "size": "85953945",
	I0729 11:15:14.263843   40779 command_runner.go:130] >       "uid": null,
	I0729 11:15:14.263847   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.263863   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.263869   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.263878   40779 command_runner.go:130] >     },
	I0729 11:15:14.263885   40779 command_runner.go:130] >     {
	I0729 11:15:14.263895   40779 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 11:15:14.263905   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.263916   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 11:15:14.263924   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263931   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.263941   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 11:15:14.263957   40779 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 11:15:14.263967   40779 command_runner.go:130] >       ],
	I0729 11:15:14.263975   40779 command_runner.go:130] >       "size": "63051080",
	I0729 11:15:14.263983   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.263992   40779 command_runner.go:130] >         "value": "0"
	I0729 11:15:14.264000   40779 command_runner.go:130] >       },
	I0729 11:15:14.264009   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.264015   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.264020   40779 command_runner.go:130] >       "pinned": false
	I0729 11:15:14.264027   40779 command_runner.go:130] >     },
	I0729 11:15:14.264034   40779 command_runner.go:130] >     {
	I0729 11:15:14.264047   40779 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 11:15:14.264065   40779 command_runner.go:130] >       "repoTags": [
	I0729 11:15:14.264076   40779 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 11:15:14.264081   40779 command_runner.go:130] >       ],
	I0729 11:15:14.264088   40779 command_runner.go:130] >       "repoDigests": [
	I0729 11:15:14.264099   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 11:15:14.264110   40779 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 11:15:14.264118   40779 command_runner.go:130] >       ],
	I0729 11:15:14.264131   40779 command_runner.go:130] >       "size": "750414",
	I0729 11:15:14.264140   40779 command_runner.go:130] >       "uid": {
	I0729 11:15:14.264149   40779 command_runner.go:130] >         "value": "65535"
	I0729 11:15:14.264158   40779 command_runner.go:130] >       },
	I0729 11:15:14.264166   40779 command_runner.go:130] >       "username": "",
	I0729 11:15:14.264175   40779 command_runner.go:130] >       "spec": null,
	I0729 11:15:14.264182   40779 command_runner.go:130] >       "pinned": true
	I0729 11:15:14.264186   40779 command_runner.go:130] >     }
	I0729 11:15:14.264191   40779 command_runner.go:130] >   ]
	I0729 11:15:14.264199   40779 command_runner.go:130] > }
	I0729 11:15:14.264358   40779 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:15:14.264370   40779 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:15:14.264387   40779 kubeadm.go:934] updating node { 192.168.39.159 8443 v1.30.3 crio true true} ...
	I0729 11:15:14.264548   40779 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-893477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:15:14.264633   40779 ssh_runner.go:195] Run: crio config
	I0729 11:15:14.298856   40779 command_runner.go:130] ! time="2024-07-29 11:15:14.276448668Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 11:15:14.304984   40779 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 11:15:14.312361   40779 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 11:15:14.312385   40779 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 11:15:14.312407   40779 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 11:15:14.312411   40779 command_runner.go:130] > #
	I0729 11:15:14.312418   40779 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 11:15:14.312424   40779 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 11:15:14.312430   40779 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 11:15:14.312446   40779 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 11:15:14.312450   40779 command_runner.go:130] > # reload'.
	I0729 11:15:14.312456   40779 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 11:15:14.312464   40779 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 11:15:14.312471   40779 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 11:15:14.312479   40779 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 11:15:14.312482   40779 command_runner.go:130] > [crio]
	I0729 11:15:14.312491   40779 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 11:15:14.312496   40779 command_runner.go:130] > # containers images, in this directory.
	I0729 11:15:14.312502   40779 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 11:15:14.312512   40779 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 11:15:14.312519   40779 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 11:15:14.312526   40779 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0729 11:15:14.312532   40779 command_runner.go:130] > # imagestore = ""
	I0729 11:15:14.312538   40779 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 11:15:14.312545   40779 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 11:15:14.312550   40779 command_runner.go:130] > storage_driver = "overlay"
	I0729 11:15:14.312557   40779 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 11:15:14.312563   40779 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 11:15:14.312571   40779 command_runner.go:130] > storage_option = [
	I0729 11:15:14.312576   40779 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 11:15:14.312579   40779 command_runner.go:130] > ]
	I0729 11:15:14.312587   40779 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 11:15:14.312593   40779 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 11:15:14.312599   40779 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 11:15:14.312604   40779 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 11:15:14.312612   40779 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 11:15:14.312619   40779 command_runner.go:130] > # always happen on a node reboot
	I0729 11:15:14.312623   40779 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 11:15:14.312634   40779 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 11:15:14.312642   40779 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 11:15:14.312653   40779 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 11:15:14.312660   40779 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 11:15:14.312667   40779 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 11:15:14.312677   40779 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 11:15:14.312683   40779 command_runner.go:130] > # internal_wipe = true
	I0729 11:15:14.312691   40779 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 11:15:14.312698   40779 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 11:15:14.312703   40779 command_runner.go:130] > # internal_repair = false
	I0729 11:15:14.312710   40779 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 11:15:14.312715   40779 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 11:15:14.312722   40779 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 11:15:14.312727   40779 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 11:15:14.312736   40779 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 11:15:14.312742   40779 command_runner.go:130] > [crio.api]
	I0729 11:15:14.312747   40779 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 11:15:14.312751   40779 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 11:15:14.312758   40779 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 11:15:14.312763   40779 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 11:15:14.312771   40779 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 11:15:14.312778   40779 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 11:15:14.312784   40779 command_runner.go:130] > # stream_port = "0"
	I0729 11:15:14.312789   40779 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 11:15:14.312794   40779 command_runner.go:130] > # stream_enable_tls = false
	I0729 11:15:14.312800   40779 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 11:15:14.312806   40779 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 11:15:14.312816   40779 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 11:15:14.312827   40779 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 11:15:14.312831   40779 command_runner.go:130] > # minutes.
	I0729 11:15:14.312835   40779 command_runner.go:130] > # stream_tls_cert = ""
	I0729 11:15:14.312840   40779 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 11:15:14.312848   40779 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 11:15:14.312852   40779 command_runner.go:130] > # stream_tls_key = ""
	I0729 11:15:14.312858   40779 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 11:15:14.312866   40779 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 11:15:14.312885   40779 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 11:15:14.312893   40779 command_runner.go:130] > # stream_tls_ca = ""
	I0729 11:15:14.312906   40779 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 11:15:14.312913   40779 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 11:15:14.312924   40779 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 11:15:14.312931   40779 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 11:15:14.312937   40779 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 11:15:14.312944   40779 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 11:15:14.312948   40779 command_runner.go:130] > [crio.runtime]
	I0729 11:15:14.312955   40779 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 11:15:14.312961   40779 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 11:15:14.312966   40779 command_runner.go:130] > # "nofile=1024:2048"
	I0729 11:15:14.312972   40779 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 11:15:14.312978   40779 command_runner.go:130] > # default_ulimits = [
	I0729 11:15:14.312981   40779 command_runner.go:130] > # ]
	I0729 11:15:14.312988   40779 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 11:15:14.312993   40779 command_runner.go:130] > # no_pivot = false
	I0729 11:15:14.312999   40779 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 11:15:14.313006   40779 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 11:15:14.313013   40779 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 11:15:14.313019   40779 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 11:15:14.313026   40779 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 11:15:14.313032   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 11:15:14.313038   40779 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 11:15:14.313042   40779 command_runner.go:130] > # Cgroup setting for conmon
	I0729 11:15:14.313051   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 11:15:14.313057   40779 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 11:15:14.313069   40779 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 11:15:14.313076   40779 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 11:15:14.313085   40779 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 11:15:14.313090   40779 command_runner.go:130] > conmon_env = [
	I0729 11:15:14.313095   40779 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 11:15:14.313101   40779 command_runner.go:130] > ]
	I0729 11:15:14.313106   40779 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 11:15:14.313111   40779 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 11:15:14.313119   40779 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 11:15:14.313122   40779 command_runner.go:130] > # default_env = [
	I0729 11:15:14.313128   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313137   40779 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 11:15:14.313146   40779 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 11:15:14.313152   40779 command_runner.go:130] > # selinux = false
	I0729 11:15:14.313158   40779 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 11:15:14.313166   40779 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 11:15:14.313171   40779 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 11:15:14.313176   40779 command_runner.go:130] > # seccomp_profile = ""
	I0729 11:15:14.313182   40779 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 11:15:14.313189   40779 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 11:15:14.313194   40779 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 11:15:14.313200   40779 command_runner.go:130] > # which might increase security.
	I0729 11:15:14.313205   40779 command_runner.go:130] > # This option is currently deprecated,
	I0729 11:15:14.313213   40779 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 11:15:14.313217   40779 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 11:15:14.313223   40779 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 11:15:14.313230   40779 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 11:15:14.313236   40779 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 11:15:14.313244   40779 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 11:15:14.313249   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313256   40779 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 11:15:14.313261   40779 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 11:15:14.313268   40779 command_runner.go:130] > # the cgroup blockio controller.
	I0729 11:15:14.313272   40779 command_runner.go:130] > # blockio_config_file = ""
	I0729 11:15:14.313280   40779 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 11:15:14.313286   40779 command_runner.go:130] > # blockio parameters.
	I0729 11:15:14.313290   40779 command_runner.go:130] > # blockio_reload = false
	I0729 11:15:14.313298   40779 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 11:15:14.313302   40779 command_runner.go:130] > # irqbalance daemon.
	I0729 11:15:14.313307   40779 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 11:15:14.313317   40779 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 11:15:14.313326   40779 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 11:15:14.313334   40779 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 11:15:14.313341   40779 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 11:15:14.313347   40779 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 11:15:14.313354   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313358   40779 command_runner.go:130] > # rdt_config_file = ""
	I0729 11:15:14.313367   40779 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 11:15:14.313374   40779 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 11:15:14.313402   40779 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 11:15:14.313409   40779 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 11:15:14.313415   40779 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 11:15:14.313421   40779 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 11:15:14.313427   40779 command_runner.go:130] > # will be added.
	I0729 11:15:14.313431   40779 command_runner.go:130] > # default_capabilities = [
	I0729 11:15:14.313437   40779 command_runner.go:130] > # 	"CHOWN",
	I0729 11:15:14.313441   40779 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 11:15:14.313446   40779 command_runner.go:130] > # 	"FSETID",
	I0729 11:15:14.313450   40779 command_runner.go:130] > # 	"FOWNER",
	I0729 11:15:14.313456   40779 command_runner.go:130] > # 	"SETGID",
	I0729 11:15:14.313459   40779 command_runner.go:130] > # 	"SETUID",
	I0729 11:15:14.313465   40779 command_runner.go:130] > # 	"SETPCAP",
	I0729 11:15:14.313470   40779 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 11:15:14.313475   40779 command_runner.go:130] > # 	"KILL",
	I0729 11:15:14.313478   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313487   40779 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 11:15:14.313495   40779 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 11:15:14.313502   40779 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 11:15:14.313508   40779 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 11:15:14.313516   40779 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 11:15:14.313520   40779 command_runner.go:130] > default_sysctls = [
	I0729 11:15:14.313527   40779 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 11:15:14.313531   40779 command_runner.go:130] > ]
	I0729 11:15:14.313538   40779 command_runner.go:130] > # List of devices on the host that a
	I0729 11:15:14.313546   40779 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 11:15:14.313552   40779 command_runner.go:130] > # allowed_devices = [
	I0729 11:15:14.313555   40779 command_runner.go:130] > # 	"/dev/fuse",
	I0729 11:15:14.313561   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313566   40779 command_runner.go:130] > # List of additional devices, specified as
	I0729 11:15:14.313574   40779 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 11:15:14.313581   40779 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 11:15:14.313589   40779 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 11:15:14.313595   40779 command_runner.go:130] > # additional_devices = [
	I0729 11:15:14.313602   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313609   40779 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 11:15:14.313613   40779 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 11:15:14.313618   40779 command_runner.go:130] > # 	"/etc/cdi",
	I0729 11:15:14.313622   40779 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 11:15:14.313627   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313633   40779 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 11:15:14.313641   40779 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 11:15:14.313647   40779 command_runner.go:130] > # Defaults to false.
	I0729 11:15:14.313652   40779 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 11:15:14.313659   40779 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 11:15:14.313667   40779 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 11:15:14.313671   40779 command_runner.go:130] > # hooks_dir = [
	I0729 11:15:14.313677   40779 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 11:15:14.313681   40779 command_runner.go:130] > # ]
	I0729 11:15:14.313689   40779 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 11:15:14.313697   40779 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 11:15:14.313702   40779 command_runner.go:130] > # its default mounts from the following two files:
	I0729 11:15:14.313707   40779 command_runner.go:130] > #
	I0729 11:15:14.313713   40779 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 11:15:14.313721   40779 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 11:15:14.313727   40779 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 11:15:14.313732   40779 command_runner.go:130] > #
	I0729 11:15:14.313738   40779 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 11:15:14.313746   40779 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 11:15:14.313752   40779 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 11:15:14.313758   40779 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 11:15:14.313762   40779 command_runner.go:130] > #
	I0729 11:15:14.313768   40779 command_runner.go:130] > # default_mounts_file = ""
	I0729 11:15:14.313773   40779 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 11:15:14.313781   40779 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 11:15:14.313787   40779 command_runner.go:130] > pids_limit = 1024
	I0729 11:15:14.313793   40779 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 11:15:14.313801   40779 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 11:15:14.313806   40779 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 11:15:14.313818   40779 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 11:15:14.313829   40779 command_runner.go:130] > # log_size_max = -1
	I0729 11:15:14.313838   40779 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 11:15:14.313844   40779 command_runner.go:130] > # log_to_journald = false
	I0729 11:15:14.313851   40779 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 11:15:14.313856   40779 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 11:15:14.313863   40779 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 11:15:14.313868   40779 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 11:15:14.313873   40779 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 11:15:14.313877   40779 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 11:15:14.313882   40779 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 11:15:14.313889   40779 command_runner.go:130] > # read_only = false
	I0729 11:15:14.313894   40779 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 11:15:14.313900   40779 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 11:15:14.313906   40779 command_runner.go:130] > # live configuration reload.
	I0729 11:15:14.313910   40779 command_runner.go:130] > # log_level = "info"
	I0729 11:15:14.313917   40779 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 11:15:14.313924   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.313928   40779 command_runner.go:130] > # log_filter = ""
	I0729 11:15:14.313935   40779 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 11:15:14.313943   40779 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 11:15:14.313949   40779 command_runner.go:130] > # separated by comma.
	I0729 11:15:14.313956   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.313961   40779 command_runner.go:130] > # uid_mappings = ""
	I0729 11:15:14.313967   40779 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 11:15:14.313975   40779 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 11:15:14.313981   40779 command_runner.go:130] > # separated by comma.
	I0729 11:15:14.313988   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.313993   40779 command_runner.go:130] > # gid_mappings = ""
	I0729 11:15:14.313999   40779 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 11:15:14.314007   40779 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 11:15:14.314015   40779 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 11:15:14.314024   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.314030   40779 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 11:15:14.314036   40779 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 11:15:14.314043   40779 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 11:15:14.314051   40779 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 11:15:14.314066   40779 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 11:15:14.314075   40779 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 11:15:14.314081   40779 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 11:15:14.314088   40779 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 11:15:14.314095   40779 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 11:15:14.314101   40779 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 11:15:14.314106   40779 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 11:15:14.314111   40779 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 11:15:14.314117   40779 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 11:15:14.314122   40779 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 11:15:14.314128   40779 command_runner.go:130] > drop_infra_ctr = false
	I0729 11:15:14.314135   40779 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 11:15:14.314143   40779 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 11:15:14.314151   40779 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 11:15:14.314157   40779 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 11:15:14.314164   40779 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 11:15:14.314172   40779 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 11:15:14.314177   40779 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 11:15:14.314184   40779 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 11:15:14.314187   40779 command_runner.go:130] > # shared_cpuset = ""
	I0729 11:15:14.314195   40779 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 11:15:14.314200   40779 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 11:15:14.314205   40779 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 11:15:14.314213   40779 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 11:15:14.314220   40779 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 11:15:14.314225   40779 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 11:15:14.314233   40779 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 11:15:14.314237   40779 command_runner.go:130] > # enable_criu_support = false
	I0729 11:15:14.314242   40779 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 11:15:14.314250   40779 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 11:15:14.314257   40779 command_runner.go:130] > # enable_pod_events = false
	I0729 11:15:14.314262   40779 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 11:15:14.314270   40779 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 11:15:14.314276   40779 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 11:15:14.314281   40779 command_runner.go:130] > # default_runtime = "runc"
	I0729 11:15:14.314286   40779 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 11:15:14.314298   40779 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 11:15:14.314308   40779 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 11:15:14.314317   40779 command_runner.go:130] > # creation as a file is not desired either.
	I0729 11:15:14.314326   40779 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 11:15:14.314333   40779 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 11:15:14.314338   40779 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 11:15:14.314341   40779 command_runner.go:130] > # ]
	I0729 11:15:14.314348   40779 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 11:15:14.314356   40779 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 11:15:14.314362   40779 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 11:15:14.314369   40779 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 11:15:14.314371   40779 command_runner.go:130] > #
	I0729 11:15:14.314376   40779 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 11:15:14.314383   40779 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 11:15:14.314405   40779 command_runner.go:130] > # runtime_type = "oci"
	I0729 11:15:14.314411   40779 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 11:15:14.314416   40779 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 11:15:14.314422   40779 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 11:15:14.314426   40779 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 11:15:14.314432   40779 command_runner.go:130] > # monitor_env = []
	I0729 11:15:14.314437   40779 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 11:15:14.314443   40779 command_runner.go:130] > # allowed_annotations = []
	I0729 11:15:14.314449   40779 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 11:15:14.314454   40779 command_runner.go:130] > # Where:
	I0729 11:15:14.314459   40779 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 11:15:14.314465   40779 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 11:15:14.314472   40779 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 11:15:14.314478   40779 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 11:15:14.314484   40779 command_runner.go:130] > #   in $PATH.
	I0729 11:15:14.314490   40779 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 11:15:14.314497   40779 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 11:15:14.314503   40779 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 11:15:14.314508   40779 command_runner.go:130] > #   state.
	I0729 11:15:14.314514   40779 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 11:15:14.314521   40779 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 11:15:14.314527   40779 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 11:15:14.314535   40779 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 11:15:14.314543   40779 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 11:15:14.314549   40779 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 11:15:14.314558   40779 command_runner.go:130] > #   The currently recognized values are:
	I0729 11:15:14.314564   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 11:15:14.314573   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 11:15:14.314581   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 11:15:14.314587   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 11:15:14.314596   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 11:15:14.314604   40779 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 11:15:14.314612   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 11:15:14.314620   40779 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 11:15:14.314628   40779 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 11:15:14.314634   40779 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 11:15:14.314640   40779 command_runner.go:130] > #   deprecated option "conmon".
	I0729 11:15:14.314646   40779 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 11:15:14.314653   40779 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 11:15:14.314659   40779 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 11:15:14.314671   40779 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 11:15:14.314679   40779 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 11:15:14.314686   40779 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 11:15:14.314692   40779 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 11:15:14.314712   40779 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 11:15:14.314718   40779 command_runner.go:130] > #
	I0729 11:15:14.314728   40779 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 11:15:14.314733   40779 command_runner.go:130] > #
	I0729 11:15:14.314742   40779 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 11:15:14.314750   40779 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 11:15:14.314756   40779 command_runner.go:130] > #
	I0729 11:15:14.314762   40779 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 11:15:14.314770   40779 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 11:15:14.314774   40779 command_runner.go:130] > #
	I0729 11:15:14.314780   40779 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 11:15:14.314786   40779 command_runner.go:130] > # feature.
	I0729 11:15:14.314789   40779 command_runner.go:130] > #
	I0729 11:15:14.314797   40779 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 11:15:14.314803   40779 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 11:15:14.314813   40779 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 11:15:14.314823   40779 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 11:15:14.314831   40779 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 11:15:14.314835   40779 command_runner.go:130] > #
	I0729 11:15:14.314841   40779 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 11:15:14.314849   40779 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 11:15:14.314852   40779 command_runner.go:130] > #
	I0729 11:15:14.314858   40779 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 11:15:14.314866   40779 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 11:15:14.314869   40779 command_runner.go:130] > #
	I0729 11:15:14.314875   40779 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 11:15:14.314882   40779 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 11:15:14.314885   40779 command_runner.go:130] > # limitation.
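For illustration only (not part of the captured run): a minimal pod that opts into the seccomp notifier described above, assuming the active runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations; the pod name and image are placeholders.

	# Sketch: pod annotated so CRI-O stops it ~5s after a blocked syscall is seen.
	# restartPolicy must be Never, per the comments above.
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo        # placeholder name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: app
	    image: busybox                   # placeholder image
	    command: ["sleep", "3600"]
	EOF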
	I0729 11:15:14.314893   40779 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 11:15:14.314900   40779 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 11:15:14.314904   40779 command_runner.go:130] > runtime_type = "oci"
	I0729 11:15:14.314909   40779 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 11:15:14.314912   40779 command_runner.go:130] > runtime_config_path = ""
	I0729 11:15:14.314919   40779 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 11:15:14.314923   40779 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 11:15:14.314927   40779 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 11:15:14.314933   40779 command_runner.go:130] > monitor_env = [
	I0729 11:15:14.314939   40779 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 11:15:14.314943   40779 command_runner.go:130] > ]
	I0729 11:15:14.314948   40779 command_runner.go:130] > privileged_without_host_devices = false
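As a hedged sketch (not something this run does): an additional runtime handler could be registered alongside the runc entry above using the same keys, for example via a CRI-O drop-in file; the handler name, crun path, and drop-in directory are assumptions here.

	# Assumes the drop-in directory /etc/crio/crio.conf.d and an installed crun binary.
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	EOF
	sudo systemctl restart crio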
	I0729 11:15:14.314956   40779 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 11:15:14.314963   40779 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 11:15:14.314969   40779 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 11:15:14.314978   40779 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0729 11:15:14.314987   40779 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 11:15:14.314995   40779 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 11:15:14.315004   40779 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 11:15:14.315013   40779 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 11:15:14.315019   40779 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 11:15:14.315026   40779 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 11:15:14.315030   40779 command_runner.go:130] > # Example:
	I0729 11:15:14.315034   40779 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 11:15:14.315039   40779 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 11:15:14.315046   40779 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 11:15:14.315050   40779 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 11:15:14.315054   40779 command_runner.go:130] > # cpuset = 0
	I0729 11:15:14.315057   40779 command_runner.go:130] > # cpushares = "0-1"
	I0729 11:15:14.315063   40779 command_runner.go:130] > # Where:
	I0729 11:15:14.315067   40779 command_runner.go:130] > # The workload name is workload-type.
	I0729 11:15:14.315073   40779 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 11:15:14.315078   40779 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 11:15:14.315083   40779 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 11:15:14.315091   40779 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 11:15:14.315098   40779 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 11:15:14.315103   40779 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 11:15:14.315109   40779 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 11:15:14.315115   40779 command_runner.go:130] > # Default value is set to true
	I0729 11:15:14.315120   40779 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 11:15:14.315127   40779 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 11:15:14.315134   40779 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 11:15:14.315138   40779 command_runner.go:130] > # Default value is set to 'false'
	I0729 11:15:14.315145   40779 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 11:15:14.315151   40779 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 11:15:14.315157   40779 command_runner.go:130] > #
	I0729 11:15:14.315163   40779 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 11:15:14.315171   40779 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 11:15:14.315179   40779 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 11:15:14.315187   40779 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 11:15:14.315195   40779 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 11:15:14.315198   40779 command_runner.go:130] > [crio.image]
	I0729 11:15:14.315206   40779 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 11:15:14.315210   40779 command_runner.go:130] > # default_transport = "docker://"
	I0729 11:15:14.315218   40779 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 11:15:14.315226   40779 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 11:15:14.315232   40779 command_runner.go:130] > # global_auth_file = ""
	I0729 11:15:14.315237   40779 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 11:15:14.315245   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.315251   40779 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 11:15:14.315257   40779 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 11:15:14.315266   40779 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 11:15:14.315271   40779 command_runner.go:130] > # This option supports live configuration reload.
	I0729 11:15:14.315277   40779 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 11:15:14.315284   40779 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 11:15:14.315290   40779 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 11:15:14.315297   40779 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 11:15:14.315306   40779 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 11:15:14.315310   40779 command_runner.go:130] > # pause_command = "/pause"
	I0729 11:15:14.315318   40779 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 11:15:14.315325   40779 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 11:15:14.315333   40779 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 11:15:14.315344   40779 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 11:15:14.315352   40779 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 11:15:14.315360   40779 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 11:15:14.315364   40779 command_runner.go:130] > # pinned_images = [
	I0729 11:15:14.315370   40779 command_runner.go:130] > # ]
	I0729 11:15:14.315376   40779 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 11:15:14.315384   40779 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 11:15:14.315391   40779 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 11:15:14.315399   40779 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 11:15:14.315406   40779 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 11:15:14.315412   40779 command_runner.go:130] > # signature_policy = ""
	I0729 11:15:14.315417   40779 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 11:15:14.315425   40779 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 11:15:14.315433   40779 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 11:15:14.315440   40779 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 11:15:14.315447   40779 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 11:15:14.315451   40779 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 11:15:14.315459   40779 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 11:15:14.315467   40779 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 11:15:14.315473   40779 command_runner.go:130] > # changing them here.
	I0729 11:15:14.315477   40779 command_runner.go:130] > # insecure_registries = [
	I0729 11:15:14.315482   40779 command_runner.go:130] > # ]
	I0729 11:15:14.315489   40779 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 11:15:14.315497   40779 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 11:15:14.315503   40779 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 11:15:14.315508   40779 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 11:15:14.315514   40779 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 11:15:14.315522   40779 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 11:15:14.315528   40779 command_runner.go:130] > # CNI plugins.
	I0729 11:15:14.315531   40779 command_runner.go:130] > [crio.network]
	I0729 11:15:14.315539   40779 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 11:15:14.315544   40779 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 11:15:14.315550   40779 command_runner.go:130] > # cni_default_network = ""
	I0729 11:15:14.315556   40779 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 11:15:14.315564   40779 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 11:15:14.315569   40779 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 11:15:14.315575   40779 command_runner.go:130] > # plugin_dirs = [
	I0729 11:15:14.315578   40779 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 11:15:14.315583   40779 command_runner.go:130] > # ]
	I0729 11:15:14.315589   40779 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 11:15:14.315595   40779 command_runner.go:130] > [crio.metrics]
	I0729 11:15:14.315600   40779 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 11:15:14.315606   40779 command_runner.go:130] > enable_metrics = true
	I0729 11:15:14.315610   40779 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 11:15:14.315616   40779 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 11:15:14.315623   40779 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 11:15:14.315630   40779 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 11:15:14.315636   40779 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 11:15:14.315641   40779 command_runner.go:130] > # metrics_collectors = [
	I0729 11:15:14.315645   40779 command_runner.go:130] > # 	"operations",
	I0729 11:15:14.315652   40779 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 11:15:14.315656   40779 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 11:15:14.315661   40779 command_runner.go:130] > # 	"operations_errors",
	I0729 11:15:14.315665   40779 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 11:15:14.315671   40779 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 11:15:14.315675   40779 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 11:15:14.315681   40779 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 11:15:14.315685   40779 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 11:15:14.315693   40779 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 11:15:14.315697   40779 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 11:15:14.315703   40779 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 11:15:14.315707   40779 command_runner.go:130] > # 	"containers_oom_total",
	I0729 11:15:14.315713   40779 command_runner.go:130] > # 	"containers_oom",
	I0729 11:15:14.315717   40779 command_runner.go:130] > # 	"processes_defunct",
	I0729 11:15:14.315723   40779 command_runner.go:130] > # 	"operations_total",
	I0729 11:15:14.315728   40779 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 11:15:14.315734   40779 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 11:15:14.315738   40779 command_runner.go:130] > # 	"operations_errors_total",
	I0729 11:15:14.315744   40779 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 11:15:14.315748   40779 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 11:15:14.315755   40779 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 11:15:14.315759   40779 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 11:15:14.315768   40779 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 11:15:14.315774   40779 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 11:15:14.315778   40779 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 11:15:14.315785   40779 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 11:15:14.315788   40779 command_runner.go:130] > # ]
	I0729 11:15:14.315793   40779 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 11:15:14.315799   40779 command_runner.go:130] > # metrics_port = 9090
	I0729 11:15:14.315803   40779 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 11:15:14.315807   40779 command_runner.go:130] > # metrics_socket = ""
	I0729 11:15:14.315815   40779 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 11:15:14.315820   40779 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 11:15:14.315829   40779 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 11:15:14.315833   40779 command_runner.go:130] > # certificate on any modification event.
	I0729 11:15:14.315839   40779 command_runner.go:130] > # metrics_cert = ""
	I0729 11:15:14.315844   40779 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 11:15:14.315850   40779 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 11:15:14.315854   40779 command_runner.go:130] > # metrics_key = ""
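Since enable_metrics is true above, a quick way to confirm the exporter is reachable (assuming the default metrics_port of 9090 and the usual Prometheus-style /metrics path) would be:

	# Hedged check from the node itself; adjust host/port if metrics_port is overridden.
	curl -s http://127.0.0.1:9090/metrics | grep -c 'crio_'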
	I0729 11:15:14.315860   40779 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 11:15:14.315864   40779 command_runner.go:130] > [crio.tracing]
	I0729 11:15:14.315870   40779 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 11:15:14.315876   40779 command_runner.go:130] > # enable_tracing = false
	I0729 11:15:14.315881   40779 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0729 11:15:14.315888   40779 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 11:15:14.315895   40779 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 11:15:14.315899   40779 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 11:15:14.315903   40779 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 11:15:14.315906   40779 command_runner.go:130] > [crio.nri]
	I0729 11:15:14.315914   40779 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 11:15:14.315920   40779 command_runner.go:130] > # enable_nri = false
	I0729 11:15:14.315924   40779 command_runner.go:130] > # NRI socket to listen on.
	I0729 11:15:14.315930   40779 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 11:15:14.315934   40779 command_runner.go:130] > # NRI plugin directory to use.
	I0729 11:15:14.315941   40779 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 11:15:14.315946   40779 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 11:15:14.315953   40779 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 11:15:14.315959   40779 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 11:15:14.315965   40779 command_runner.go:130] > # nri_disable_connections = false
	I0729 11:15:14.315970   40779 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 11:15:14.315976   40779 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 11:15:14.315981   40779 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 11:15:14.315988   40779 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 11:15:14.315994   40779 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 11:15:14.315999   40779 command_runner.go:130] > [crio.stats]
	I0729 11:15:14.316008   40779 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 11:15:14.316014   40779 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 11:15:14.316019   40779 command_runner.go:130] > # stats_collection_period = 0
	I0729 11:15:14.316139   40779 cni.go:84] Creating CNI manager for ""
	I0729 11:15:14.316151   40779 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 11:15:14.316162   40779 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:15:14.316187   40779 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-893477 NodeName:multinode-893477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:15:14.316320   40779 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-893477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
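A hedged way to sanity-check a generated config like the one above by hand (the path matches where minikube copies the file in the next step; on a node that already runs a cluster the preflight checks may complain, so this is illustrative only):

	# Render what kubeadm would do with this config, without modifying the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run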
	
	I0729 11:15:14.316377   40779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:15:14.326348   40779 command_runner.go:130] > kubeadm
	I0729 11:15:14.326364   40779 command_runner.go:130] > kubectl
	I0729 11:15:14.326368   40779 command_runner.go:130] > kubelet
	I0729 11:15:14.326385   40779 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:15:14.326432   40779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:15:14.335914   40779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 11:15:14.353920   40779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:15:14.371873   40779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 11:15:14.389947   40779 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I0729 11:15:14.400249   40779 command_runner.go:130] > 192.168.39.159	control-plane.minikube.internal
	I0729 11:15:14.400780   40779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:15:14.554867   40779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:15:14.571677   40779 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477 for IP: 192.168.39.159
	I0729 11:15:14.571716   40779 certs.go:194] generating shared ca certs ...
	I0729 11:15:14.571739   40779 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:15:14.572028   40779 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:15:14.572076   40779 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:15:14.572095   40779 certs.go:256] generating profile certs ...
	I0729 11:15:14.572184   40779 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/client.key
	I0729 11:15:14.572249   40779 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key.f37b8ebe
	I0729 11:15:14.572285   40779 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key
	I0729 11:15:14.572295   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:15:14.572306   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:15:14.572318   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:15:14.572331   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:15:14.572343   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:15:14.572355   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:15:14.572367   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:15:14.572379   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:15:14.572439   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:15:14.572467   40779 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:15:14.572477   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:15:14.572533   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:15:14.572560   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:15:14.572586   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:15:14.572623   40779 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:15:14.572652   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.572665   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.572679   40779 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem -> /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.573340   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:15:14.601525   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:15:14.627572   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:15:14.654432   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:15:14.680755   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:15:14.706732   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:15:14.733250   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:15:14.758963   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/multinode-893477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:15:14.784772   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:15:14.811288   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:15:14.836042   40779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:15:14.861228   40779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:15:14.880464   40779 ssh_runner.go:195] Run: openssl version
	I0729 11:15:14.886602   40779 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 11:15:14.886680   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:15:14.897952   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902459   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902508   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.902556   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:15:14.908053   40779 command_runner.go:130] > 3ec20f2e
	I0729 11:15:14.908225   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:15:14.917531   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:15:14.928671   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933271   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933307   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.933348   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:15:14.938728   40779 command_runner.go:130] > b5213941
	I0729 11:15:14.938860   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:15:14.948422   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:15:14.960192   40779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.964961   40779 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.965073   40779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.965131   40779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:15:14.971088   40779 command_runner.go:130] > 51391683
	I0729 11:15:14.971256   40779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
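The three sequences above repeat one pattern: copy the certificate under /usr/share/ca-certificates, compute its OpenSSL subject hash, and link the hash name in /etc/ssl/certs. A condensed sketch of that pattern (the file name is a placeholder):

	# Same steps as the log above, for an arbitrary CA certificate.
	CERT=/usr/share/ca-certificates/example.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"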
	I0729 11:15:14.981240   40779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:15:14.986226   40779 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:15:14.986249   40779 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 11:15:14.986255   40779 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 11:15:14.986261   40779 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 11:15:14.986268   40779 command_runner.go:130] > Access: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986275   40779 command_runner.go:130] > Modify: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986281   40779 command_runner.go:130] > Change: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986288   40779 command_runner.go:130] >  Birth: 2024-07-29 11:08:01.009237606 +0000
	I0729 11:15:14.986368   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:15:14.992266   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:14.992424   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:15:14.998036   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:14.998217   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:15:15.003883   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.004126   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:15:15.009785   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.009854   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:15:15.015445   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.015580   40779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:15:15.021498   40779 command_runner.go:130] > Certificate will not expire
	I0729 11:15:15.021575   40779 kubeadm.go:392] StartCluster: {Name:multinode-893477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-893477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.186 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:15:15.021711   40779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:15:15.021781   40779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:15:15.069049   40779 command_runner.go:130] > 431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a
	I0729 11:15:15.069085   40779 command_runner.go:130] > e2275ef3de0527c1700a65468ea19e03300aff678da1429f9f469630c64ca2b3
	I0729 11:15:15.069094   40779 command_runner.go:130] > df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31
	I0729 11:15:15.069112   40779 command_runner.go:130] > 29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b
	I0729 11:15:15.069122   40779 command_runner.go:130] > d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f
	I0729 11:15:15.069131   40779 command_runner.go:130] > 7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc
	I0729 11:15:15.069140   40779 command_runner.go:130] > eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb
	I0729 11:15:15.069152   40779 command_runner.go:130] > 8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309
	I0729 11:15:15.069162   40779 command_runner.go:130] > 15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f
	I0729 11:15:15.069189   40779 cri.go:89] found id: "431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a"
	I0729 11:15:15.069201   40779 cri.go:89] found id: "e2275ef3de0527c1700a65468ea19e03300aff678da1429f9f469630c64ca2b3"
	I0729 11:15:15.069207   40779 cri.go:89] found id: "df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31"
	I0729 11:15:15.069216   40779 cri.go:89] found id: "29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b"
	I0729 11:15:15.069224   40779 cri.go:89] found id: "d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f"
	I0729 11:15:15.069230   40779 cri.go:89] found id: "7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc"
	I0729 11:15:15.069239   40779 cri.go:89] found id: "eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb"
	I0729 11:15:15.069247   40779 cri.go:89] found id: "8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309"
	I0729 11:15:15.069252   40779 cri.go:89] found id: "15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f"
	I0729 11:15:15.069264   40779 cri.go:89] found id: ""
	I0729 11:15:15.069319   40779 ssh_runner.go:195] Run: sudo runc list -f json
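The ID list above comes from the quiet crictl listing; for a manual look at the same containers (the ID below is taken from this log), one could drop --quiet and inspect an entry:

	# Same filter as the command above, but with full columns, then inspect one ID.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect 431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a | head -n 20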
	
	
	==> CRI-O <==
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.678527075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bd92f61-b7d4-4a65-9cf7-7e338f82a4e2 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.679593204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a23ce702-d45f-4712-bfb3-dfa29b282846 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.680147506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251966680124653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a23ce702-d45f-4712-bfb3-dfa29b282846 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.680885981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50da0c81-2cff-4ec1-b077-586c0ff75fab name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.680944178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50da0c81-2cff-4ec1-b077-586c0ff75fab name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.681290956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50da0c81-2cff-4ec1-b077-586c0ff75fab name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.706705882Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=09faf9d3-0e53-4f84-b19a-27f7ead7bf97 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.707184402Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mq79l,Uid:35d18e03-abcd-4771-921d-4f3e02d2e156,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251755076169247,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:15:20.915918812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4sc9b,Uid:4b82e9c0-f851-46b5-880b-60e698c16330,Namespace:kube-system,Attempt:2,}
,State:SANDBOX_READY,CreatedAt:1722251721299492991,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:15:20.915920187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&PodSandboxMetadata{Name:kindnet-52h82,Uid:71930213-e103-46cc-8f0e-6f6574c5dd81,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251721272711023,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-07-29T11:15:20.915922929Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d8307277-6866-4782-af18-0b3af40c2684,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251721240201539,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:15:20.915916713Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&PodSandboxMetadata{Name:kube-proxy-hmnwn,Uid:5442195f-18ec-4f12-b044-8959420929e0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251721235112617,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:15:20.915926218Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-893477,Uid:8a0629e58e85d7d6c9a342efce14c9d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251717463499932,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8a0629e58e85d7d6c9a342efce14c9d8,kubernetes.io/config.seen: 2024-07-29T11:15:16.908537094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&PodSandboxMetadat
a{Name:kube-scheduler-multinode-893477,Uid:140a8600d3623acabc21e27d313a56b2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251717462200603,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 140a8600d3623acabc21e27d313a56b2,kubernetes.io/config.seen: 2024-07-29T11:15:16.908537873Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-893477,Uid:badbd5e616450f4bd6b31620460f4621,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251717460711560,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-8
93477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.159:8443,kubernetes.io/config.hash: badbd5e616450f4bd6b31620460f4621,kubernetes.io/config.seen: 2024-07-29T11:15:16.908536143Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&PodSandboxMetadata{Name:etcd-multinode-893477,Uid:e2682c22d77c2a5909cbbff88030177c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722251717459151877,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.159:2379,kuberne
tes.io/config.hash: e2682c22d77c2a5909cbbff88030177c,kubernetes.io/config.seen: 2024-07-29T11:15:16.908532184Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4sc9b,Uid:4b82e9c0-f851-46b5-880b-60e698c16330,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722251703347848034,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:08:40.553979439Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mq79l,Uid:35d18e03-abcd-4771-921d-4f3e02d2e156,Namespace:def
ault,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251378884357660,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:09:38.575325293Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d8307277-6866-4782-af18-0b3af40c2684,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251320870276177,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:08:40.559185966Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&PodSandboxMetadata{Name:kube-proxy-hmnwn,Uid:5442195f-18ec-4f12-b044-8959420929e0,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251305450833582,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:08:23.641127890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&PodSandboxMetadata{Name:kindnet-52h82,Uid:71930213-e103-46cc-8f0e-6f6574c5dd81,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251304836986161,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:08:23.625342609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-893477,Uid:8a0629e58e85d7d6c9a342efce14c9d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251284642221124,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8a0629e58e85d7d6c9a342efce14c9d8,kubernetes.io/config.seen: 2024-07-29T11:08:04.162913532Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-893477,Uid:140a8600d3623acabc21e27d313a56b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251284628435250,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 140a8600d3623acabc21e27d313a56b2,kubernetes.io/config.seen: 2024-07-29T11:08:04.162914446Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-893477,Uid:badbd5e616450f4bd6b31620460f4621,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251284622341083,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.159:8443,kubernetes.io/config.hash: badbd5e616450f4bd6b31620460f4621,kubernetes.io/config.seen: 2024-07-29T11:08:04.162912381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&PodSandboxMetadata{Name:etcd-multinode-893477,Uid:e2682c22d77c2a5909cbbff88030177c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722251284617514341,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.159:2379,kubernetes.io/config.hash: e2682c22d77c2a5909cbbff88030177c,kubernetes.io/config.seen: 2024-07-29T11:08:04.162908553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=09faf9d3-0e53-4f84-b19a-27f7ead7bf97 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.707945149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fe3e802-b366-4580-8109-350650c5f89a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.708002875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fe3e802-b366-4580-8109-350650c5f89a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.708361921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fe3e802-b366-4580-8109-350650c5f89a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.728977381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8332d06a-445f-4b43-9ff0-e492dde58a89 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.729050265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8332d06a-445f-4b43-9ff0-e492dde58a89 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.730192409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29fd288f-4796-4409-84cd-7e92d3d3f8ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.730599739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251966730579144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29fd288f-4796-4409-84cd-7e92d3d3f8ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.731624230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b93b793-a855-449b-82a9-e5825f021c13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.731801633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b93b793-a855-449b-82a9-e5825f021c13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.732381774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b93b793-a855-449b-82a9-e5825f021c13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.783060338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac055f55-81f8-4fb9-9588-0652fe2349d5 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.783150532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac055f55-81f8-4fb9-9588-0652fe2349d5 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.785204317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b1364ea-e0fa-4abb-ad4a-877fa822fa02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.785659685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722251966785637581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b1364ea-e0fa-4abb-ad4a-877fa822fa02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.786387908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cca09c4-bdef-4bee-8793-9845a5b8ea35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.786451300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cca09c4-bdef-4bee-8793-9845a5b8ea35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:19:26 multinode-893477 crio[3012]: time="2024-07-29 11:19:26.786881590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b10e2ac7fb746b7117c59621a2788d2d1144f4f8c95d91a879ac2ba66faab3f7,PodSandboxId:e7943e0a40a146abda5d199c07edbe8884cd90ba7cf86ad7e92f674bed5b144f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722251755206578093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987,PodSandboxId:c66fb57672e37bc31fbb10ce705d175ccfef7b37539f4d58cc305a9b7ae9e501,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722251721688049335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc,PodSandboxId:ec088cad8f62e673fbbd4a1b870ed19f6c9007df61431b83d936338722f8612e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722251721599497611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ee6daa9db0558fccaaadbeaa6a763cb5d6b97ae7ac8c158994f837e5c25e33,PodSandboxId:efe7df772093aca4c3aecf8895741b03d5754787c05f7b8b9ac843cb6ee61623,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722251721499115295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},An
notations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a,PodSandboxId:f519bf5dbbf07d2858e003fca9c1280b5e93b5fa996cc90389a6b46299cbf723,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722251721440522033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6,PodSandboxId:0be8accd1ddf2ffe9e4af84c17472af9658b6e8383806a29f23f1464fc267bad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722251717721146762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9a342efce14c9d8,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9,PodSandboxId:372138dd092b599374fbee256aaf32a5032ce5ad1f2cbf0bf269cb0353cba131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722251717746905820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff88030177c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78
f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad,PodSandboxId:e61c14c0e97abb34d0322738c6cff9bc37e0ff825200d8c6919725a9fe39bf5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722251717724372822,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:map[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64,PodSandboxId:c083d38f7905e9d535798499678481a381faccd541bc97b632cdd38f87f1a56a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722251717602039528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a,PodSandboxId:1e316fe6f649d7b60ccc7aa524ac9079f6184a0b2551e0563080fac19e7acb69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722251703554414414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4sc9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82e9c0-f851-46b5-880b-60e698c16330,},Annotations:map[string]string{io.kubernetes.container.hash: c01525f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db46dd5cb41579f410c6229df194fdd536bfe3d3b9408eaaf9dc6f39e6625122,PodSandboxId:a6429c897a26758c7edb0bb3194ce24f4c6a7536779a68c2d85ed45f34c0f8dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722251381911711280,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mq79l,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35d18e03-abcd-4771-921d-4f3e02d2e156,},Annotations:map[string]string{io.kubernetes.container.hash: 70ab4bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7717af6e7e7faaca1a2f2a31ed9420d99e1a1d6faaec942ede4f2c8ff8ad31,PodSandboxId:aa7afdbbe82af6f01b4b3a65150edb1fa82e143183d00b1b3c86bb4cc7c63ea5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722251321011905054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d8307277-6866-4782-af18-0b3af40c2684,},Annotations:map[string]string{io.kubernetes.container.hash: 567f98be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b,PodSandboxId:4ae79f01a6ff4a4b4602739c68fa9d99d830375bacb4cc19b9a5bb1b2dc752d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722251308940286613,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-52h82,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71930213-e103-46cc-8f0e-6f6574c5dd81,},Annotations:map[string]string{io.kubernetes.container.hash: 74ef82e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f,PodSandboxId:4e49bfac1b2cca75d79e16c175bd751ff1e25a2fc4f6d9e7003e737fddf0391c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722251305540324772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmnwn,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5442195f-18ec-4f12-b044-8959420929e0,},Annotations:map[string]string{io.kubernetes.container.hash: 561ace81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc,PodSandboxId:30c1d8f8284ba195c506a74da35fd657b8aee0a436d9fc910b8767177e514445,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722251284922178090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2682c22d77c2a5909cbbff8803017
7c,},Annotations:map[string]string{io.kubernetes.container.hash: d10e78f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb,PodSandboxId:fc794e0326fb4c50ece301cac670bd866c3d713cb969d00c08215e3825048079,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722251284848727943,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0629e58e85d7d6c9
a342efce14c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f,PodSandboxId:04173721ae531cee94345d37f21955b5adcc71e70e8b043c7a439aeb5f5c555b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722251284809903498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 140a8600d3623acabc21e27d313a56b2,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309,PodSandboxId:89e865ac40d062b1017359a3a72b064bc280e9fdb4cb4c06a083cd7d7d9c0b33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722251284811315992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-893477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badbd5e616450f4bd6b31620460f4621,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d26cf06f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cca09c4-bdef-4bee-8793-9845a5b8ea35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b10e2ac7fb746       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   e7943e0a40a14       busybox-fc5497c4f-mq79l
	470c02518ecee       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   c66fb57672e37       kindnet-52h82
	3042fde14486b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   ec088cad8f62e       coredns-7db6d8ff4d-4sc9b
	c9ee6daa9db05       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   efe7df772093a       storage-provisioner
	4199b2f23de6b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   f519bf5dbbf07       kube-proxy-hmnwn
	77f55feb77a82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   372138dd092b5       etcd-multinode-893477
	60cade466a5be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   e61c14c0e97ab       kube-apiserver-multinode-893477
	1adc266ca6fcf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   0be8accd1ddf2       kube-controller-manager-multinode-893477
	b645b366743a3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   c083d38f7905e       kube-scheduler-multinode-893477
	431fdf08bfd04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   1e316fe6f649d       coredns-7db6d8ff4d-4sc9b
	db46dd5cb4157       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a6429c897a267       busybox-fc5497c4f-mq79l
	df7717af6e7e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   aa7afdbbe82af       storage-provisioner
	29597d58a1c6d       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   4ae79f01a6ff4       kindnet-52h82
	d0df24cda44f0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      11 minutes ago      Exited              kube-proxy                0                   4e49bfac1b2cc       kube-proxy-hmnwn
	7f788b5e98ba1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   30c1d8f8284ba       etcd-multinode-893477
	eeb0db57c1689       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   fc794e0326fb4       kube-controller-manager-multinode-893477
	8ae2b946a03fe       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   89e865ac40d06       kube-apiserver-multinode-893477
	15dc31d0fa5d3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   04173721ae531       kube-scheduler-multinode-893477
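	A listing like the one above can typically be reproduced against the node that produced these logs with crictl; the profile name below is taken from this report and SSH access via minikube is assumed:
	
	  minikube ssh -p multinode-893477 "sudo crictl ps -a"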
	
	
	==> coredns [3042fde14486b705b28da130ff41d9ac7c75131a17d505e38bfef6bf811550bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51901 - 53317 "HINFO IN 6022383364451738498.608849142257830461. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018547434s
	
	
	==> coredns [431fdf08bfd04e14c50908516847a57b7b1774c9aee56d1f51587e3363f5531a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:58028 - 53751 "HINFO IN 6936568639892992830.7635242098144383862. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015195349s
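	For a live view of these CoreDNS logs, the standard kubectl log selector for kubeadm-style clusters can be used (kubeconfig context assumed to match the profile name):
	
	  kubectl --context multinode-893477 -n kube-system logs -l k8s-app=kube-dns --tail=50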
	
	
	==> describe nodes <==
	Name:               multinode-893477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-893477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=multinode-893477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_08_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:08:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-893477
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:19:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:15:20 +0000   Mon, 29 Jul 2024 11:08:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-893477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db0f71eb93f141508bb8d5922f75b2cf
	  System UUID:                db0f71eb-93f1-4150-8bb8-d5922f75b2cf
	  Boot ID:                    2f70c706-9750-4256-aa63-11f58a74942c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mq79l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-7db6d8ff4d-4sc9b                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-multinode-893477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-52h82                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-893477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-893477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hmnwn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-893477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-893477 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-893477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-893477 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                    node-controller  Node multinode-893477 event: Registered Node multinode-893477 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-893477 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m11s)  kubelet          Node multinode-893477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m11s)  kubelet          Node multinode-893477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m11s)  kubelet          Node multinode-893477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node multinode-893477 event: Registered Node multinode-893477 in Controller
	
	
	Name:               multinode-893477-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-893477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=multinode-893477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_16_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:16:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-893477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:17:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:17:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:17:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:17:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 11:16:31 +0000   Mon, 29 Jul 2024 11:17:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-893477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aed3968240d448e1afb6fa6c397e5660
	  System UUID:                aed39682-40d4-48e1-afb6-fa6c397e5660
	  Boot ID:                    b92c34db-d281-4862-b264-22c951ce0f87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tgnfq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-hcg5s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-ppbjw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node multinode-893477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet          Node multinode-893477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node multinode-893477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m51s                  kubelet          Node multinode-893477-m02 status is now: NodeReady
	  Normal  Starting                 3m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-893477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-893477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-893477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m7s                   kubelet          Node multinode-893477-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-893477-m02 status is now: NodeNotReady
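	The NodeNotReady transition recorded above can be cross-checked from the client side; the context name below is assumed to match the minikube profile used in this test:
	
	  kubectl --context multinode-893477 get nodes
	  kubectl --context multinode-893477 describe node multinode-893477-m02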
	
	
	==> dmesg <==
	[  +0.060265] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.173136] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147204] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.298975] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.180101] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[Jul29 11:08] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.067055] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996647] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.086721] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.221283] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.450257] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +5.597061] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 11:09] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 11:15] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.157920] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.241157] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +0.166965] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +0.330517] systemd-fstab-generator[2992]: Ignoring "noauto" option for root device
	[ +10.437723] systemd-fstab-generator[3119]: Ignoring "noauto" option for root device
	[  +0.089624] kauditd_printk_skb: 110 callbacks suppressed
	[  +2.142360] systemd-fstab-generator[3243]: Ignoring "noauto" option for root device
	[  +4.673588] kauditd_printk_skb: 76 callbacks suppressed
	[ +12.782833] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.432316] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[ +18.588695] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [77f55feb77a82b2a13d3cf50733a2c7b2634187f93114a9e63d48a4c7c8983f9] <==
	{"level":"info","ts":"2024-07-29T11:15:18.206868Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T11:15:18.207046Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T11:15:18.20923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2024-07-29T11:15:18.211891Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2024-07-29T11:15:18.212049Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:15:18.212097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:15:18.218004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:15:18.218316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:15:18.218415Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:15:18.218565Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:15:18.218626Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:15:19.362166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2024-07-29T11:15:19.362378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.362458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2024-07-29T11:15:19.36783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:multinode-893477 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:15:19.36813Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:15:19.370424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:15:19.372212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:15:19.372456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:15:19.372498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:15:19.374138Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	
	
	==> etcd [7f788b5e98ba1c8a675d9e98dbe21056b66e25d26547dbd70119d2b7d37357dc] <==
	{"level":"info","ts":"2024-07-29T11:08:06.012731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:08:06.012695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:08:06.014622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2024-07-29T11:08:06.015246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:08:06.015285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:08:06.016645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:09:15.679347Z","caller":"traceutil/trace.go:171","msg":"trace[471705015] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"194.085273ms","start":"2024-07-29T11:09:15.485219Z","end":"2024-07-29T11:09:15.679305Z","steps":["trace[471705015] 'process raft request'  (duration: 134.369645ms)","trace[471705015] 'compare'  (duration: 59.427068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:09:15.680149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.534948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T11:09:15.682593Z","caller":"traceutil/trace.go:171","msg":"trace[800302879] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"103.066004ms","start":"2024-07-29T11:09:15.579511Z","end":"2024-07-29T11:09:15.682577Z","steps":["trace[800302879] 'agreement among raft nodes before linearized reading'  (duration: 100.546029ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:09:16.542398Z","caller":"traceutil/trace.go:171","msg":"trace[1899739747] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"239.916032ms","start":"2024-07-29T11:09:16.302459Z","end":"2024-07-29T11:09:16.542375Z","steps":["trace[1899739747] 'read index received'  (duration: 201.573517ms)","trace[1899739747] 'applied index is now lower than readState.Index'  (duration: 38.335624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:09:16.542589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.111596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-l8gpm\" ","response":"range_response_count:1 size:1337"}
	{"level":"info","ts":"2024-07-29T11:09:16.542646Z","caller":"traceutil/trace.go:171","msg":"trace[1963257127] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-l8gpm; range_end:; response_count:1; response_revision:452; }","duration":"240.19757ms","start":"2024-07-29T11:09:16.302438Z","end":"2024-07-29T11:09:16.542635Z","steps":["trace[1963257127] 'agreement among raft nodes before linearized reading'  (duration: 240.024991ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:09:16.54281Z","caller":"traceutil/trace.go:171","msg":"trace[1893070044] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"240.749192ms","start":"2024-07-29T11:09:16.302047Z","end":"2024-07-29T11:09:16.542796Z","steps":["trace[1893070044] 'process raft request'  (duration: 202.060587ms)","trace[1893070044] 'compare'  (duration: 38.104159ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:10:14.915844Z","caller":"traceutil/trace.go:171","msg":"trace[1124866864] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"225.163358ms","start":"2024-07-29T11:10:14.690649Z","end":"2024-07-29T11:10:14.915813Z","steps":["trace[1124866864] 'process raft request'  (duration: 223.452659ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:10:14.916184Z","caller":"traceutil/trace.go:171","msg":"trace[2123130020] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"166.643621ms","start":"2024-07-29T11:10:14.749528Z","end":"2024-07-29T11:10:14.916172Z","steps":["trace[2123130020] 'process raft request'  (duration: 166.154863ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:13:31.697703Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T11:13:31.697904Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-893477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	{"level":"warn","ts":"2024-07-29T11:13:31.698037Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.698141Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.761237Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.159:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:13:31.761333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.159:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T11:13:31.761423Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f0ef8018a32f46af","current-leader-member-id":"f0ef8018a32f46af"}
	{"level":"info","ts":"2024-07-29T11:13:31.76397Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:13:31.764114Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-07-29T11:13:31.76414Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-893477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	
	
	==> kernel <==
	 11:19:27 up 11 min,  0 users,  load average: 0.09, 0.22, 0.14
	Linux multinode-893477 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [29597d58a1c6df0fa5261205dad6c97ae9a48a09726837f10f957cdb9627c67b] <==
	I0729 11:12:50.023278       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:00.015231       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:00.015307       1 main.go:299] handling current node
	I0729 11:13:00.015331       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:00.015338       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:00.015502       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:00.015557       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:10.018516       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:10.018568       1 main.go:299] handling current node
	I0729 11:13:10.018600       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:10.018610       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:10.018868       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:10.018884       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:20.023340       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:20.023467       1 main.go:299] handling current node
	I0729 11:13:20.023506       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:20.023526       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:20.023865       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:20.023903       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	I0729 11:13:30.014999       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:13:30.015048       1 main.go:299] handling current node
	I0729 11:13:30.015064       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:13:30.015070       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:13:30.015204       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0729 11:13:30.015209       1 main.go:322] Node multinode-893477-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [470c02518ecee49412e9b9eed91d3c4d4af412868b459293da7635e07e146987] <==
	I0729 11:18:22.726994       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:18:32.727897       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:18:32.728042       1 main.go:299] handling current node
	I0729 11:18:32.728088       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:18:32.728107       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:18:42.736146       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:18:42.736440       1 main.go:299] handling current node
	I0729 11:18:42.736492       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:18:42.736517       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:18:52.726305       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:18:52.726419       1 main.go:299] handling current node
	I0729 11:18:52.726448       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:18:52.726467       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:19:02.732615       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:19:02.732732       1 main.go:299] handling current node
	I0729 11:19:02.732801       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:19:02.732809       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:19:12.726331       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:19:12.726460       1 main.go:299] handling current node
	I0729 11:19:12.726506       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:19:12.726533       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	I0729 11:19:22.726904       1 main.go:295] Handling node with IPs: map[192.168.39.159:{}]
	I0729 11:19:22.726961       1 main.go:299] handling current node
	I0729 11:19:22.726980       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0729 11:19:22.726985       1 main.go:322] Node multinode-893477-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [60cade466a5bed94e877889779a34023013cf91f3b64b0ebadfbb0bd2afed3ad] <==
	I0729 11:15:20.780830       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 11:15:20.790290       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 11:15:20.790330       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 11:15:20.790420       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 11:15:20.791100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:15:20.796361       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:15:20.797246       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 11:15:20.798872       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 11:15:20.799257       1 aggregator.go:165] initial CRD sync complete...
	I0729 11:15:20.800093       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 11:15:20.800136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 11:15:20.800162       1 cache.go:39] Caches are synced for autoregister controller
	E0729 11:15:20.815318       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 11:15:20.822542       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 11:15:20.825406       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:15:20.825450       1 policy_source.go:224] refreshing policies
	I0729 11:15:20.857311       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:15:21.708892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:15:22.870518       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:15:23.015524       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:15:23.027933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:15:23.114516       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:15:23.121444       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:15:34.074026       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:15:34.102369       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8ae2b946a03fe26603b6586732eb3d98a75a41a1dda15710649b2b4ca9901309] <==
	E0729 11:13:31.713436       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0729 11:13:31.713946       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:13:31.714936       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 11:13:31.715030       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 11:13:31.715077       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:13:31.715114       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 11:13:31.715518       1 controller.go:157] Shutting down quota evaluator
	I0729 11:13:31.715565       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.720190       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 11:13:31.720862       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 11:13:31.726238       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726251       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726255       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.726259       1 controller.go:176] quota evaluator worker shutdown
	I0729 11:13:31.731326       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0729 11:13:31.732419       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 11:13:31.734290       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0729 11:13:31.736323       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736721       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736858       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.736913       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737023       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737116       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737177       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:13:31.737224       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1adc266ca6fcf70f19b195a4ca13985c07d8ecc72c0be87de39850cd9d4cdbb6] <==
	I0729 11:16:00.907707       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m02\" does not exist"
	I0729 11:16:00.920269       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m02" podCIDRs=["10.244.1.0/24"]
	I0729 11:16:02.833884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.617µs"
	I0729 11:16:02.852163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.394µs"
	I0729 11:16:02.891030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.311µs"
	I0729 11:16:02.900278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.058µs"
	I0729 11:16:02.902950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.733µs"
	I0729 11:16:03.772079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.046µs"
	I0729 11:16:20.686130       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:20.705400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.969µs"
	I0729 11:16:20.720491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.663µs"
	I0729 11:16:24.281493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.9882ms"
	I0729 11:16:24.282562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.862µs"
	I0729 11:16:39.024136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:40.260849       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:16:40.260984       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:16:40.273415       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.2.0/24"]
	I0729 11:16:59.836331       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:17:05.441799       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:17:44.155139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.525958ms"
	I0729 11:17:44.155705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.793µs"
	I0729 11:17:54.035308       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pxmtg"
	I0729 11:17:54.064676       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pxmtg"
	I0729 11:17:54.064723       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mfhng"
	I0729 11:17:54.092310       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mfhng"
	
	
	==> kube-controller-manager [eeb0db57c16891281f3f9f275246d336e7652b81a432c23c5d71a8b4b3b42ffb] <==
	I0729 11:08:42.721952       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 11:09:16.584853       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m02\" does not exist"
	I0729 11:09:16.598111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m02" podCIDRs=["10.244.1.0/24"]
	I0729 11:09:17.727185       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-893477-m02"
	I0729 11:09:36.191477       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:09:38.586774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.209794ms"
	I0729 11:09:38.600177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.567859ms"
	I0729 11:09:38.600429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.15µs"
	I0729 11:09:42.359207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.267346ms"
	I0729 11:09:42.359489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.018µs"
	I0729 11:09:42.454828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.988108ms"
	I0729 11:09:42.455472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.739µs"
	I0729 11:10:14.918950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:10:14.919024       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:10:14.932790       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.2.0/24"]
	I0729 11:10:17.749150       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-893477-m03"
	I0729 11:10:36.142178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:04.739826       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:06.187973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:11:06.188467       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-893477-m03\" does not exist"
	I0729 11:11:06.196113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-893477-m03" podCIDRs=["10.244.3.0/24"]
	I0729 11:11:25.950317       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:12:07.805305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-893477-m02"
	I0729 11:12:12.902146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.882904ms"
	I0729 11:12:12.902415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.469µs"
	
	
	==> kube-proxy [4199b2f23de6b892f3ca152bbba39d826902dd855e6940c3898b8631c1dcc33a] <==
	I0729 11:15:21.774257       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:15:21.793715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0729 11:15:21.883827       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:15:21.883926       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:15:21.883958       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:15:21.886568       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:15:21.886913       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:15:21.887245       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:15:21.889364       1 config.go:192] "Starting service config controller"
	I0729 11:15:21.889621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:15:21.889982       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:15:21.890017       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:15:21.890514       1 config.go:319] "Starting node config controller"
	I0729 11:15:21.890556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:15:21.991845       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:15:21.991939       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:15:21.992015       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d0df24cda44f0feae0fc939e47c5b99a7b125dd715714e8dd7fd2a5fdf62584f] <==
	I0729 11:08:25.732169       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:08:25.747066       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0729 11:08:25.781907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:08:25.781975       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:08:25.781992       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:08:25.784947       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:08:25.785174       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:08:25.785205       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:08:25.786784       1 config.go:192] "Starting service config controller"
	I0729 11:08:25.787008       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:08:25.787055       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:08:25.787075       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:08:25.787653       1 config.go:319] "Starting node config controller"
	I0729 11:08:25.787679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:08:25.888133       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:08:25.888234       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:08:25.888254       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15dc31d0fa5d36287dc6569cba661bdd1c270db0a6107479fa7e48c1d5d0276f] <==
	E0729 11:08:07.440381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:07.440393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:08:07.440400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:08:08.291389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:08:08.291437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:08:08.324120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:08:08.324171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:08:08.363548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.364146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.507004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:08:08.507050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:08:08.536309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:08:08.536365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:08:08.590453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:08:08.590578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:08:08.627114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.627166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.660309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:08:08.660824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:08:08.676113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:08:08.676158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:08:08.775168       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:08:08.775217       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:08:11.325202       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 11:13:31.708634       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b645b366743a37de89c3a43ff6d94b7a323259dce39ec144aa1db107fc0f1d64] <==
	I0729 11:15:18.721055       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:15:20.721149       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:15:20.721241       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:15:20.721251       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:15:20.721258       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:15:20.779179       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:15:20.779223       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:15:20.783012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:15:20.783144       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:15:20.783174       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:15:20.783189       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:15:20.883518       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.932915    3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d8307277-6866-4782-af18-0b3af40c2684-tmp\") pod \"storage-provisioner\" (UID: \"d8307277-6866-4782-af18-0b3af40c2684\") " pod="kube-system/storage-provisioner"
	Jul 29 11:15:20 multinode-893477 kubelet[3250]: I0729 11:15:20.934306    3250 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:15:21 multinode-893477 kubelet[3250]: I0729 11:15:21.019089    3250 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 11:15:23 multinode-893477 kubelet[3250]: I0729 11:15:23.109569    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 11:15:28 multinode-893477 kubelet[3250]: I0729 11:15:28.042969    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 11:16:17 multinode-893477 kubelet[3250]: E0729 11:16:17.005554    3250 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:16:17 multinode-893477 kubelet[3250]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:17:17 multinode-893477 kubelet[3250]: E0729 11:17:17.007819    3250 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:17:17 multinode-893477 kubelet[3250]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:17:17 multinode-893477 kubelet[3250]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:17:17 multinode-893477 kubelet[3250]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:17:17 multinode-893477 kubelet[3250]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:18:17 multinode-893477 kubelet[3250]: E0729 11:18:17.006259    3250 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:18:17 multinode-893477 kubelet[3250]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:18:17 multinode-893477 kubelet[3250]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:18:17 multinode-893477 kubelet[3250]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:18:17 multinode-893477 kubelet[3250]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:19:17 multinode-893477 kubelet[3250]: E0729 11:19:17.006266    3250 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:19:17 multinode-893477 kubelet[3250]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:19:17 multinode-893477 kubelet[3250]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:19:17 multinode-893477 kubelet[3250]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:19:17 multinode-893477 kubelet[3250]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0729 11:19:26.340554   43228 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19337-3845/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-893477 -n multinode-893477
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-893477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.43s)
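
(Reference sketch, not part of the captured output: the commands below reconstruct the stop-and-verify sequence this test exercises, using the profile name, binary path, and verification commands recorded in this report; the exact form of the stop invocation is assumed to match the one used for the preload profile later in this run.)

	# stop all nodes of the multinode profile
	out/minikube-linux-amd64 stop -p multinode-893477
	# verify the API server is reported as stopped on the control-plane node
	out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-893477 -n multinode-893477
	# list any pods that are not in Running phase
	kubectl --context multinode-893477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running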

x
+
TestPreload (176.4s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 11:24:40.969335   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.641517949s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-310966 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-310966 image pull gcr.io/k8s-minikube/busybox: (2.822105029s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-310966
E0729 11:24:57.916054   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-310966: (7.298124865s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.535499266s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-310966 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-29 11:26:10.575383132 +0000 UTC m=+3948.659111607
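
(Reference sketch, not part of the captured output: the round-trip below is reconstructed from the TestPreload commands logged above; the final grep merely stands in for the test's image-list assertion and is an assumption.)

	out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-310966 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-310966
	out/minikube-linux-amd64 start -p test-preload-310966 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	# the test expects the previously pulled image to survive the restart
	out/minikube-linux-amd64 -p test-preload-310966 image list | grep gcr.io/k8s-minikube/busybox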
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-310966 -n test-preload-310966
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-310966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-310966 logs -n 25: (1.110541477s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477 sudo cat                                       | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt                       | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m02:/home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n                                                                 | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | multinode-893477-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-893477 ssh -n multinode-893477-m02 sudo cat                                   | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-893477 node stop m03                                                          | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	| node    | multinode-893477 node start                                                             | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| stop    | -p multinode-893477                                                                     | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:11 UTC |                     |
	| start   | -p multinode-893477                                                                     | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:17 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC |                     |
	| node    | multinode-893477 node delete                                                            | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC | 29 Jul 24 11:17 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-893477 stop                                                                   | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:17 UTC |                     |
	| start   | -p multinode-893477                                                                     | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:22 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-893477                                                                | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:22 UTC |                     |
	| start   | -p multinode-893477-m02                                                                 | multinode-893477-m02 | jenkins | v1.33.1 | 29 Jul 24 11:22 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-893477-m03                                                                 | multinode-893477-m03 | jenkins | v1.33.1 | 29 Jul 24 11:22 UTC | 29 Jul 24 11:23 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-893477                                                                 | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:23 UTC |                     |
	| delete  | -p multinode-893477-m03                                                                 | multinode-893477-m03 | jenkins | v1.33.1 | 29 Jul 24 11:23 UTC | 29 Jul 24 11:23 UTC |
	| delete  | -p multinode-893477                                                                     | multinode-893477     | jenkins | v1.33.1 | 29 Jul 24 11:23 UTC | 29 Jul 24 11:23 UTC |
	| start   | -p test-preload-310966                                                                  | test-preload-310966  | jenkins | v1.33.1 | 29 Jul 24 11:23 UTC | 29 Jul 24 11:24 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-310966 image pull                                                          | test-preload-310966  | jenkins | v1.33.1 | 29 Jul 24 11:24 UTC | 29 Jul 24 11:24 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-310966                                                                  | test-preload-310966  | jenkins | v1.33.1 | 29 Jul 24 11:24 UTC | 29 Jul 24 11:25 UTC |
	| start   | -p test-preload-310966                                                                  | test-preload-310966  | jenkins | v1.33.1 | 29 Jul 24 11:25 UTC | 29 Jul 24 11:26 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-310966 image list                                                          | test-preload-310966  | jenkins | v1.33.1 | 29 Jul 24 11:26 UTC | 29 Jul 24 11:26 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:25:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:25:03.852886   45576 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:25:03.853129   45576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:25:03.853139   45576 out.go:304] Setting ErrFile to fd 2...
	I0729 11:25:03.853144   45576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:25:03.853300   45576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:25:03.853798   45576 out.go:298] Setting JSON to false
	I0729 11:25:03.854676   45576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4050,"bootTime":1722248254,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:25:03.854757   45576 start.go:139] virtualization: kvm guest
	I0729 11:25:03.857459   45576 out.go:177] * [test-preload-310966] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:25:03.859090   45576 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:25:03.859113   45576 notify.go:220] Checking for updates...
	I0729 11:25:03.861873   45576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:25:03.863380   45576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:25:03.865143   45576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:25:03.866694   45576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:25:03.868101   45576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:25:03.869755   45576 config.go:182] Loaded profile config "test-preload-310966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 11:25:03.870139   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:25:03.870190   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:25:03.884720   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0729 11:25:03.885170   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:25:03.885697   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:25:03.885718   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:25:03.886076   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:25:03.886250   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:03.888261   45576 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:25:03.889962   45576 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:25:03.890289   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:25:03.890339   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:25:03.904940   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0729 11:25:03.905353   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:25:03.905854   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:25:03.905881   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:25:03.906296   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:25:03.906497   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:03.941565   45576 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:25:03.942859   45576 start.go:297] selected driver: kvm2
	I0729 11:25:03.942873   45576 start.go:901] validating driver "kvm2" against &{Name:test-preload-310966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-310966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:25:03.943001   45576 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:25:03.943692   45576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:25:03.943772   45576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:25:03.959100   45576 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:25:03.959442   45576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:25:03.959517   45576 cni.go:84] Creating CNI manager for ""
	I0729 11:25:03.959532   45576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:25:03.959616   45576 start.go:340] cluster config:
	{Name:test-preload-310966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-310966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:25:03.959726   45576 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:25:03.961695   45576 out.go:177] * Starting "test-preload-310966" primary control-plane node in "test-preload-310966" cluster
	I0729 11:25:03.963212   45576 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 11:25:04.071459   45576 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 11:25:04.071513   45576 cache.go:56] Caching tarball of preloaded images
	I0729 11:25:04.071696   45576 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 11:25:04.073819   45576 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0729 11:25:04.075461   45576 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 11:25:04.191630   45576 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 11:25:16.223617   45576 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 11:25:16.223726   45576 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 11:25:17.067588   45576 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0729 11:25:17.067741   45576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/config.json ...
	I0729 11:25:17.067975   45576 start.go:360] acquireMachinesLock for test-preload-310966: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:25:17.068039   45576 start.go:364] duration metric: took 43.078µs to acquireMachinesLock for "test-preload-310966"
	I0729 11:25:17.068059   45576 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:25:17.068068   45576 fix.go:54] fixHost starting: 
	I0729 11:25:17.068364   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:25:17.068402   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:25:17.082937   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0729 11:25:17.083379   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:25:17.083833   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:25:17.083852   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:25:17.084221   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:25:17.084428   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:17.084595   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetState
	I0729 11:25:17.086357   45576 fix.go:112] recreateIfNeeded on test-preload-310966: state=Stopped err=<nil>
	I0729 11:25:17.086379   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	W0729 11:25:17.086544   45576 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:25:17.089232   45576 out.go:177] * Restarting existing kvm2 VM for "test-preload-310966" ...
	I0729 11:25:17.090589   45576 main.go:141] libmachine: (test-preload-310966) Calling .Start
	I0729 11:25:17.090809   45576 main.go:141] libmachine: (test-preload-310966) Ensuring networks are active...
	I0729 11:25:17.091677   45576 main.go:141] libmachine: (test-preload-310966) Ensuring network default is active
	I0729 11:25:17.092016   45576 main.go:141] libmachine: (test-preload-310966) Ensuring network mk-test-preload-310966 is active
	I0729 11:25:17.092391   45576 main.go:141] libmachine: (test-preload-310966) Getting domain xml...
	I0729 11:25:17.093060   45576 main.go:141] libmachine: (test-preload-310966) Creating domain...
	I0729 11:25:18.283803   45576 main.go:141] libmachine: (test-preload-310966) Waiting to get IP...
	I0729 11:25:18.284679   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:18.285102   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:18.285134   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:18.285053   45659 retry.go:31] will retry after 247.108147ms: waiting for machine to come up
	I0729 11:25:18.533686   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:18.534316   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:18.534344   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:18.534285   45659 retry.go:31] will retry after 317.041768ms: waiting for machine to come up
	I0729 11:25:18.853086   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:18.853474   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:18.853503   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:18.853445   45659 retry.go:31] will retry after 449.919691ms: waiting for machine to come up
	I0729 11:25:19.305215   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:19.305617   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:19.305644   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:19.305567   45659 retry.go:31] will retry after 394.780923ms: waiting for machine to come up
	I0729 11:25:19.702111   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:19.702568   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:19.702592   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:19.702518   45659 retry.go:31] will retry after 671.306608ms: waiting for machine to come up
	I0729 11:25:20.375341   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:20.375710   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:20.375737   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:20.375668   45659 retry.go:31] will retry after 830.539368ms: waiting for machine to come up
	I0729 11:25:21.207492   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:21.207973   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:21.207995   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:21.207924   45659 retry.go:31] will retry after 738.754716ms: waiting for machine to come up
	I0729 11:25:21.947929   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:21.948374   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:21.948402   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:21.948318   45659 retry.go:31] will retry after 964.617142ms: waiting for machine to come up
	I0729 11:25:22.915123   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:22.915538   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:22.915565   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:22.915482   45659 retry.go:31] will retry after 1.414419712s: waiting for machine to come up
	I0729 11:25:24.331856   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:24.332309   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:24.332340   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:24.332228   45659 retry.go:31] will retry after 2.170137413s: waiting for machine to come up
	I0729 11:25:26.505727   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:26.506168   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:26.506194   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:26.506121   45659 retry.go:31] will retry after 2.783140828s: waiting for machine to come up
	I0729 11:25:29.291793   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:29.292290   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:29.292311   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:29.292256   45659 retry.go:31] will retry after 2.793645572s: waiting for machine to come up
	I0729 11:25:32.088586   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:32.088989   45576 main.go:141] libmachine: (test-preload-310966) DBG | unable to find current IP address of domain test-preload-310966 in network mk-test-preload-310966
	I0729 11:25:32.089024   45576 main.go:141] libmachine: (test-preload-310966) DBG | I0729 11:25:32.088943   45659 retry.go:31] will retry after 4.238319438s: waiting for machine to come up
	I0729 11:25:36.331602   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.331953   45576 main.go:141] libmachine: (test-preload-310966) Found IP for machine: 192.168.39.84
	I0729 11:25:36.331968   45576 main.go:141] libmachine: (test-preload-310966) Reserving static IP address...
	I0729 11:25:36.331981   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has current primary IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.332384   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "test-preload-310966", mac: "52:54:00:8c:87:77", ip: "192.168.39.84"} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.332414   45576 main.go:141] libmachine: (test-preload-310966) DBG | skip adding static IP to network mk-test-preload-310966 - found existing host DHCP lease matching {name: "test-preload-310966", mac: "52:54:00:8c:87:77", ip: "192.168.39.84"}
	I0729 11:25:36.332432   45576 main.go:141] libmachine: (test-preload-310966) Reserved static IP address: 192.168.39.84
	I0729 11:25:36.332449   45576 main.go:141] libmachine: (test-preload-310966) Waiting for SSH to be available...
	I0729 11:25:36.332480   45576 main.go:141] libmachine: (test-preload-310966) DBG | Getting to WaitForSSH function...
	I0729 11:25:36.334432   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.334750   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.334782   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.334919   45576 main.go:141] libmachine: (test-preload-310966) DBG | Using SSH client type: external
	I0729 11:25:36.334945   45576 main.go:141] libmachine: (test-preload-310966) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa (-rw-------)
	I0729 11:25:36.334990   45576 main.go:141] libmachine: (test-preload-310966) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:25:36.335020   45576 main.go:141] libmachine: (test-preload-310966) DBG | About to run SSH command:
	I0729 11:25:36.335035   45576 main.go:141] libmachine: (test-preload-310966) DBG | exit 0
	I0729 11:25:36.458937   45576 main.go:141] libmachine: (test-preload-310966) DBG | SSH cmd err, output: <nil>: 
	I0729 11:25:36.459349   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetConfigRaw
	I0729 11:25:36.460032   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetIP
	I0729 11:25:36.462363   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.462740   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.462769   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.462939   45576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/config.json ...
	I0729 11:25:36.463249   45576 machine.go:94] provisionDockerMachine start ...
	I0729 11:25:36.463272   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:36.463483   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:36.465578   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.465866   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.465902   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.466038   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:36.466230   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.466383   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.466514   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:36.466655   45576 main.go:141] libmachine: Using SSH client type: native
	I0729 11:25:36.466861   45576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0729 11:25:36.466880   45576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:25:36.571092   45576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:25:36.571119   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetMachineName
	I0729 11:25:36.571348   45576 buildroot.go:166] provisioning hostname "test-preload-310966"
	I0729 11:25:36.571364   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetMachineName
	I0729 11:25:36.571527   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:36.574033   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.574397   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.574419   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.574547   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:36.574742   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.574898   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.575043   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:36.575183   45576 main.go:141] libmachine: Using SSH client type: native
	I0729 11:25:36.575374   45576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0729 11:25:36.575389   45576 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-310966 && echo "test-preload-310966" | sudo tee /etc/hostname
	I0729 11:25:36.693872   45576 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-310966
	
	I0729 11:25:36.693902   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:36.696727   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.697141   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.697168   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.697335   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:36.697526   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.697690   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.697886   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:36.698057   45576 main.go:141] libmachine: Using SSH client type: native
	I0729 11:25:36.698251   45576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0729 11:25:36.698277   45576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-310966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-310966/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-310966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:25:36.812317   45576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:25:36.812351   45576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:25:36.812371   45576 buildroot.go:174] setting up certificates
	I0729 11:25:36.812380   45576 provision.go:84] configureAuth start
	I0729 11:25:36.812388   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetMachineName
	I0729 11:25:36.812680   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetIP
	I0729 11:25:36.815511   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.815834   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.815887   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.816047   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:36.818197   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.818509   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.818532   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.818743   45576 provision.go:143] copyHostCerts
	I0729 11:25:36.818821   45576 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:25:36.818838   45576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:25:36.818900   45576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:25:36.818990   45576 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:25:36.818999   45576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:25:36.819024   45576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:25:36.819079   45576 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:25:36.819088   45576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:25:36.819108   45576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:25:36.819193   45576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.test-preload-310966 san=[127.0.0.1 192.168.39.84 localhost minikube test-preload-310966]
	I0729 11:25:36.932492   45576 provision.go:177] copyRemoteCerts
	I0729 11:25:36.932550   45576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:25:36.932574   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:36.935242   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.935581   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:36.935609   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:36.935748   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:36.935942   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:36.936088   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:36.936248   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:25:37.017164   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:25:37.041936   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:25:37.066019   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:25:37.090249   45576 provision.go:87] duration metric: took 277.856824ms to configureAuth
	I0729 11:25:37.090275   45576 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:25:37.090425   45576 config.go:182] Loaded profile config "test-preload-310966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 11:25:37.090484   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:37.093020   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.093404   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.093436   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.093543   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:37.093801   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.093966   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.094136   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:37.094317   45576 main.go:141] libmachine: Using SSH client type: native
	I0729 11:25:37.094465   45576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0729 11:25:37.094479   45576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:25:37.359156   45576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:25:37.359189   45576 machine.go:97] duration metric: took 895.922626ms to provisionDockerMachine
	I0729 11:25:37.359205   45576 start.go:293] postStartSetup for "test-preload-310966" (driver="kvm2")
	I0729 11:25:37.359227   45576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:25:37.359253   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:37.359584   45576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:25:37.359615   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:37.362325   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.362630   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.362670   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.362847   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:37.363058   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.363201   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:37.363339   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:25:37.446159   45576 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:25:37.450359   45576 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:25:37.450383   45576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:25:37.450445   45576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:25:37.450526   45576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:25:37.450620   45576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:25:37.460371   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:25:37.484052   45576 start.go:296] duration metric: took 124.832639ms for postStartSetup
	I0729 11:25:37.484096   45576 fix.go:56] duration metric: took 20.416027662s for fixHost
	I0729 11:25:37.484122   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:37.487187   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.487543   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.487569   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.487745   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:37.487985   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.488172   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.488368   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:37.488548   45576 main.go:141] libmachine: Using SSH client type: native
	I0729 11:25:37.488788   45576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0729 11:25:37.488805   45576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:25:37.595866   45576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252337.571986234
	
	I0729 11:25:37.595891   45576 fix.go:216] guest clock: 1722252337.571986234
	I0729 11:25:37.595900   45576 fix.go:229] Guest: 2024-07-29 11:25:37.571986234 +0000 UTC Remote: 2024-07-29 11:25:37.484101376 +0000 UTC m=+33.664922609 (delta=87.884858ms)
	I0729 11:25:37.595923   45576 fix.go:200] guest clock delta is within tolerance: 87.884858ms
	I0729 11:25:37.595930   45576 start.go:83] releasing machines lock for "test-preload-310966", held for 20.527879626s
	I0729 11:25:37.595959   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:37.596279   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetIP
	I0729 11:25:37.598793   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.599177   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.599208   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.599333   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:37.599829   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:37.599995   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:25:37.600087   45576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:25:37.600122   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:37.600238   45576 ssh_runner.go:195] Run: cat /version.json
	I0729 11:25:37.600254   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:25:37.602829   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.603134   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.603176   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.603197   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.603372   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:37.603549   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.603624   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:37.603656   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:37.603701   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:37.603822   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:25:37.603886   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:25:37.603957   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:25:37.604109   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:25:37.604266   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:25:37.680247   45576 ssh_runner.go:195] Run: systemctl --version
	I0729 11:25:37.702997   45576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:25:37.851546   45576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:25:37.857866   45576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:25:37.857944   45576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:25:37.875174   45576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
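The find/mv command above renames any bridge or podman CNI config with a ".mk_disabled" suffix so CRI-O's bridge CNI takes over. A hedged Go sketch of the same idea (not minikube's cni.go code; paths and suffix mirror the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("read dir:", err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Println("rename:", err)
                    continue
                }
                fmt.Println("disabled", src)
            }
        }
    }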
	I0729 11:25:37.875199   45576 start.go:495] detecting cgroup driver to use...
	I0729 11:25:37.875268   45576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:25:37.891849   45576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:25:37.906923   45576 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:25:37.906982   45576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:25:37.921307   45576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:25:37.936038   45576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:25:38.053300   45576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:25:38.203314   45576 docker.go:233] disabling docker service ...
	I0729 11:25:38.203373   45576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:25:38.218036   45576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:25:38.231315   45576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:25:38.361582   45576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:25:38.485879   45576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:25:38.500548   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:25:38.519598   45576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0729 11:25:38.519657   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.530728   45576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:25:38.530785   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.541897   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.552642   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.563612   45576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:25:38.575043   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.586209   45576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:25:38.603808   45576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
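The sed commands above rewrite keys in the CRI-O drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls). A minimal sketch of that key rewrite in Go, assuming the drop-in path from the log; setCrioKey is an illustrative helper, not a minikube function:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCrioKey replaces (or normalizes) a `key = ...` line in a TOML drop-in.
    func setCrioKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.7"); err != nil {
            fmt.Println(err)
        }
        if err := setCrioKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Println(err)
        }
    }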
	I0729 11:25:38.615460   45576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:25:38.625473   45576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:25:38.625537   45576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:25:38.639675   45576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
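The fallback sequence above (sysctl read fails, so load br_netfilter and enable IPv4 forwarding) can be sketched as follows; the shell-outs mirror the logged commands and the code is only illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge netfilter sysctl is missing, the kernel module is not loaded yet.
        if _, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            fmt.Println("netfilter sysctl not available, loading br_netfilter:", err)
            if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v\n%s\n", err, out)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Println("could not enable ip_forward:", err)
        }
    }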
	I0729 11:25:38.649649   45576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:25:38.778263   45576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:25:38.916836   45576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:25:38.916895   45576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:25:38.921923   45576 start.go:563] Will wait 60s for crictl version
	I0729 11:25:38.921969   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:38.925803   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:25:38.970596   45576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:25:38.970686   45576 ssh_runner.go:195] Run: crio --version
	I0729 11:25:38.998970   45576 ssh_runner.go:195] Run: crio --version
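After restarting CRI-O, the log waits up to 60s for the runtime socket before querying crictl. A small sketch of that poll-with-deadline pattern, with an assumed helper name (waitForSocket) rather than the actual start.go code:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a path until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("socket is ready")
    }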
	I0729 11:25:39.029505   45576 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0729 11:25:39.031101   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetIP
	I0729 11:25:39.033784   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:39.034077   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:25:39.034115   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:25:39.034242   45576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:25:39.038450   45576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:25:39.051699   45576 kubeadm.go:883] updating cluster {Name:test-preload-310966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-310966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:25:39.051818   45576 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 11:25:39.051868   45576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:25:39.093933   45576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 11:25:39.093994   45576 ssh_runner.go:195] Run: which lz4
	I0729 11:25:39.098197   45576 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:25:39.102302   45576 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:25:39.102335   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0729 11:25:40.710856   45576 crio.go:462] duration metric: took 1.612683028s to copy over tarball
	I0729 11:25:40.710920   45576 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:25:43.073977   45576 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.363027792s)
	I0729 11:25:43.074005   45576 crio.go:469] duration metric: took 2.363125434s to extract the tarball
	I0729 11:25:43.074011   45576 ssh_runner.go:146] rm: /preloaded.tar.lz4
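The preload step above copies the cached tarball to the guest and unpacks it into /var. A hedged Go wrapper for the extract-and-time step (not minikube's ssh_runner; it simply shells out to the same tar invocation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }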
	I0729 11:25:43.116737   45576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:25:43.161924   45576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 11:25:43.161947   45576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:25:43.162032   45576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:25:43.162046   45576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 11:25:43.162079   45576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 11:25:43.162086   45576 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 11:25:43.162103   45576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 11:25:43.162139   45576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 11:25:43.162059   45576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 11:25:43.162034   45576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 11:25:43.163698   45576 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 11:25:43.163711   45576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:25:43.163717   45576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 11:25:43.163723   45576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 11:25:43.163703   45576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 11:25:43.163713   45576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 11:25:43.163716   45576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 11:25:43.163756   45576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 11:25:43.329483   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 11:25:43.369828   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 11:25:43.373529   45576 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0729 11:25:43.373560   45576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 11:25:43.373591   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:43.410506   45576 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0729 11:25:43.410550   45576 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0729 11:25:43.410572   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 11:25:43.410591   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:43.412755   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0729 11:25:43.452146   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0729 11:25:43.452218   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 11:25:43.452240   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 11:25:43.468844   45576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0729 11:25:43.468893   45576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 11:25:43.468938   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:43.495628   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0729 11:25:43.495651   45576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 11:25:43.495671   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0729 11:25:43.495685   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 11:25:43.495689   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0729 11:25:43.495741   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 11:25:43.503348   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 11:25:43.505677   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0729 11:25:43.529531   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0729 11:25:43.575054   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 11:25:44.022095   45576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:25:46.919206   45576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.423499527s)
	I0729 11:25:46.919241   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 11:25:46.919265   45576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.423502636s)
	I0729 11:25:46.919295   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0729 11:25:46.919307   45576 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 11:25:46.919319   45576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (3.423618124s)
	I0729 11:25:46.919354   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 11:25:46.919358   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0729 11:25:46.919371   45576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (3.415998326s)
	I0729 11:25:46.919397   45576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0729 11:25:46.919420   45576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 11:25:46.919437   45576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4: (3.413734794s)
	I0729 11:25:46.919444   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 11:25:46.919451   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:46.919473   45576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0729 11:25:46.919473   45576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (3.389916907s)
	I0729 11:25:46.919495   45576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 11:25:46.919506   45576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0729 11:25:46.919518   45576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (3.344440045s)
	I0729 11:25:46.919525   45576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 11:25:46.919533   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:46.919553   45576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0729 11:25:46.919574   45576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 11:25:46.919556   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:46.919609   45576 ssh_runner.go:195] Run: which crictl
	I0729 11:25:46.919580   45576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.897453972s)
	I0729 11:25:47.068767   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0729 11:25:47.068832   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0729 11:25:47.068869   45576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 11:25:47.068923   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 11:25:47.068972   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 11:25:47.069011   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 11:25:47.069033   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 11:25:47.069087   45576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 11:25:47.992657   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0729 11:25:47.992800   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 11:25:47.992887   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 11:25:47.992900   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 11:25:47.992919   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 11:25:47.992968   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 11:25:47.992993   45576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 11:25:47.993004   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 11:25:47.993067   45576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 11:25:48.004020   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0729 11:25:48.004044   45576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 11:25:48.004071   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0729 11:25:48.004094   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0729 11:25:48.004123   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0729 11:25:48.004135   45576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0729 11:25:48.449908   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 11:25:48.449960   45576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 11:25:48.450002   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 11:25:49.189559   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0729 11:25:49.189617   45576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 11:25:49.189662   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 11:25:49.933861   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0729 11:25:49.933910   45576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 11:25:49.933960   45576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 11:25:50.374472   45576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0729 11:25:50.374520   45576 cache_images.go:123] Successfully loaded all cached images
	I0729 11:25:50.374527   45576 cache_images.go:92] duration metric: took 7.212565437s to LoadCachedImages
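The LoadCachedImages block above follows one pattern per image: inspect it in the runtime with podman, and if it is missing, remove the stale tag and load the cached tarball. A simplified sketch of that decision, with hypothetical helpers (imageLoaded, loadFromCache) wrapping the same shell commands:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageLoaded mirrors: sudo podman image inspect --format {{.Id}} <ref>
    func imageLoaded(ref string) bool {
        return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
    }

    // loadFromCache mirrors: sudo podman load -i /var/lib/minikube/images/<name>
    func loadFromCache(tarball string) error {
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/pause:3.7":    "/var/lib/minikube/images/pause_3.7",
            "registry.k8s.io/etcd:3.5.3-0": "/var/lib/minikube/images/etcd_3.5.3-0",
        }
        for ref, tarball := range images {
            if imageLoaded(ref) {
                fmt.Println("already present:", ref)
                continue
            }
            fmt.Println(ref, "needs transfer, loading from cache")
            if err := loadFromCache(tarball); err != nil {
                fmt.Println(err)
            }
        }
    }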
	I0729 11:25:50.374540   45576 kubeadm.go:934] updating node { 192.168.39.84 8443 v1.24.4 crio true true} ...
	I0729 11:25:50.374643   45576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-310966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-310966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:25:50.374727   45576 ssh_runner.go:195] Run: crio config
	I0729 11:25:50.421665   45576 cni.go:84] Creating CNI manager for ""
	I0729 11:25:50.421685   45576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:25:50.421697   45576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:25:50.421715   45576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-310966 NodeName:test-preload-310966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:25:50.421837   45576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-310966"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:25:50.421902   45576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0729 11:25:50.431697   45576 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:25:50.431760   45576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:25:50.441244   45576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0729 11:25:50.458011   45576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:25:50.474683   45576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0729 11:25:50.491995   45576 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0729 11:25:50.495803   45576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
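The grep/echo one-liner above idempotently pins control-plane.minikube.internal in /etc/hosts: any stale entry is dropped and the current IP is appended. A sketch of the same logic in Go (writing to a temp path instead of /etc/hosts so the example is safe to run):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for host and appends ip<TAB>host.
    func ensureHostsEntry(contents, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(contents, "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // drop blank and stale entries
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        out := ensureHostsEntry(string(in), "192.168.39.84", "control-plane.minikube.internal")
        if err := os.WriteFile("/tmp/hosts.example", []byte(out), 0o644); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(out)
    }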
	I0729 11:25:50.507559   45576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:25:50.621153   45576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:25:50.638050   45576 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966 for IP: 192.168.39.84
	I0729 11:25:50.638073   45576 certs.go:194] generating shared ca certs ...
	I0729 11:25:50.638088   45576 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:25:50.638230   45576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:25:50.638285   45576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:25:50.638296   45576 certs.go:256] generating profile certs ...
	I0729 11:25:50.638395   45576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.key
	I0729 11:25:50.638476   45576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/apiserver.key.dfecd5e1
	I0729 11:25:50.638529   45576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/proxy-client.key
	I0729 11:25:50.638678   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:25:50.638748   45576 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:25:50.638762   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:25:50.638787   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:25:50.638829   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:25:50.638858   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:25:50.638914   45576 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:25:50.639751   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:25:50.686090   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:25:50.717668   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:25:50.752883   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:25:50.795390   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:25:50.827981   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:25:50.867459   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:25:50.893034   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:25:50.918540   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:25:50.942924   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:25:50.966808   45576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:25:50.990421   45576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:25:51.007358   45576 ssh_runner.go:195] Run: openssl version
	I0729 11:25:51.013289   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:25:51.024721   45576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:25:51.029447   45576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:25:51.029508   45576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:25:51.035574   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:25:51.046718   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:25:51.058209   45576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:25:51.062886   45576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:25:51.062931   45576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:25:51.068874   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:25:51.080843   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:25:51.092507   45576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:25:51.097314   45576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:25:51.097372   45576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:25:51.103278   45576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
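Each CA above is installed by hashing it with openssl and symlinking <hash>.0 under /etc/ssl/certs. A hedged Go sketch of that loop (it shells out to the same openssl command; the /tmp target directory is used here only to keep the example harmless):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA hashes a PEM with openssl and creates the <hash>.0 symlink.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %v", pem, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/tmp/ssl-certs-example", hash+".0") // /etc/ssl/certs in the real flow
        if err := os.MkdirAll(filepath.Dir(link), 0o755); err != nil {
            return err
        }
        os.Remove(link) // refresh if it already exists
        return os.Symlink(pem, link)
    }

    func main() {
        for _, pem := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/11064.pem",
        } {
            if err := installCA(pem); err != nil {
                fmt.Println(err)
            }
        }
    }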
	I0729 11:25:51.114864   45576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:25:51.119710   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:25:51.125988   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:25:51.132049   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:25:51.138179   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:25:51.144339   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:25:51.150363   45576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
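The "-checkend 86400" runs above verify that each certificate will still be valid 24 hours from now. The same check done natively with Go's crypto/x509, as a sketch (the input path is an example, not a claim about the test host):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert at path expires within the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        if soon {
            fmt.Println("certificate expires within 24h; it would be regenerated")
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }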
	I0729 11:25:51.156306   45576 kubeadm.go:392] StartCluster: {Name:test-preload-310966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-310966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:25:51.156423   45576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:25:51.156493   45576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:25:51.199721   45576 cri.go:89] found id: ""
	I0729 11:25:51.199797   45576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:25:51.209856   45576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:25:51.209887   45576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:25:51.209943   45576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:25:51.219613   45576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:25:51.220048   45576 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-310966" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:25:51.220143   45576 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-310966" cluster setting kubeconfig missing "test-preload-310966" context setting]
	I0729 11:25:51.220449   45576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:25:51.221056   45576 kapi.go:59] client config for test-preload-310966: &rest.Config{Host:"https://192.168.39.84:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 11:25:51.221601   45576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:25:51.231424   45576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.84
	I0729 11:25:51.231460   45576 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:25:51.231473   45576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:25:51.231582   45576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:25:51.269955   45576 cri.go:89] found id: ""
	I0729 11:25:51.270055   45576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:25:51.286133   45576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:25:51.295561   45576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:25:51.295580   45576 kubeadm.go:157] found existing configuration files:
	
	I0729 11:25:51.295639   45576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:25:51.304757   45576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:25:51.304834   45576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:25:51.314283   45576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:25:51.323598   45576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:25:51.323645   45576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:25:51.332781   45576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:25:51.341530   45576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:25:51.341610   45576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:25:51.350729   45576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:25:51.359495   45576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:25:51.359548   45576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
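The grep/rm pairs above implement a stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A minimal sketch of that loop (mirrors the logged commands, not the kubeadm.go source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // ignore error if it was already absent
                continue
            }
            fmt.Println("keeping", f)
        }
    }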
	I0729 11:25:51.368678   45576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:25:51.377852   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:51.474920   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:51.955260   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:52.206635   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:52.266683   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:52.339580   45576 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:25:52.339655   45576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:25:52.840471   45576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:25:53.339851   45576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:25:53.388917   45576 api_server.go:72] duration metric: took 1.049335166s to wait for apiserver process to appear ...
	I0729 11:25:53.388940   45576 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:25:53.388957   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:53.389506   45576 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": dial tcp 192.168.39.84:8443: connect: connection refused
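The healthz polling in this section retries GET https://192.168.39.84:8443/healthz, treating connection refused, 403, and 500 as "not ready yet" until a 200 arrives or the deadline passes. A hedged sketch of that loop; TLS verification is skipped here only to keep the example self-contained, whereas the real client uses the cluster client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        url := "https://192.168.39.84:8443/healthz"
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not reachable yet:", err)
            } else {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }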
	I0729 11:25:53.889563   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:57.530036   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:25:57.530066   45576 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:25:57.530083   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:57.653175   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:25:57.653211   45576 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:25:57.889570   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:57.894659   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:25:57.894692   45576 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:25:58.389213   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:58.396234   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:25:58.396269   45576 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:25:58.890076   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:25:58.895629   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0729 11:25:58.901932   45576 api_server.go:141] control plane version: v1.24.4
	I0729 11:25:58.901955   45576 api_server.go:131] duration metric: took 5.513009168s to wait for apiserver health ...
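The 500s and the final 200 above are minikube polling the apiserver's /healthz endpoint until every poststarthook reports ok. A minimal sketch of that kind of readiness poll follows; the helper name, interval, and plain net/http client are illustrative assumptions, not minikube's actual api_server.go code.

package apiwait

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Sketch only: the real loop also logs the response body, which is what the
// per-poststarthook [+]/[-] lines above come from.
func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}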
	I0729 11:25:58.901963   45576 cni.go:84] Creating CNI manager for ""
	I0729 11:25:58.901971   45576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:25:58.903507   45576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:25:58.904701   45576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:25:58.915243   45576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
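The two ssh_runner steps above create /etc/cni/net.d and copy a generated bridge conflist into it. For orientation, here is a sketch of writing a minimal bridge CNI configuration of that general shape; the network name, subnet, and plugin list are placeholders, not the exact 496 bytes minikube generates.

package cniconfig

import "os"

// writeBridgeConflist writes a minimal bridge CNI configuration of the kind
// copied to /etc/cni/net.d/1-k8s.conflist above. Field values are
// illustrative placeholders.
func writeBridgeConflist(path string) error {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	return os.WriteFile(path, []byte(conflist), 0o644)
}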
	I0729 11:25:58.932726   45576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:25:58.954781   45576 system_pods.go:59] 8 kube-system pods found
	I0729 11:25:58.954807   45576 system_pods.go:61] "coredns-6d4b75cb6d-7k4pb" [6ee6e009-c299-48ae-ac4c-cb2df34a9ce4] Running
	I0729 11:25:58.954812   45576 system_pods.go:61] "coredns-6d4b75cb6d-ll72b" [a8644ef3-fd75-4c62-8abd-47fb4c02884e] Running
	I0729 11:25:58.954817   45576 system_pods.go:61] "etcd-test-preload-310966" [90d3968b-aa91-4e6e-8ef3-35946188f78d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:25:58.954828   45576 system_pods.go:61] "kube-apiserver-test-preload-310966" [0205b89b-4ca6-41d0-82a0-4c5dbcb0b95f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:25:58.954834   45576 system_pods.go:61] "kube-controller-manager-test-preload-310966" [1eaa49ea-1069-4a14-af34-34b317b2f937] Running
	I0729 11:25:58.954837   45576 system_pods.go:61] "kube-proxy-82zpt" [dd873001-dd0f-4f80-9bf5-f0d4550aa1c1] Running
	I0729 11:25:58.954841   45576 system_pods.go:61] "kube-scheduler-test-preload-310966" [52acc999-a2fe-4f83-ab1c-feaa8e31ebb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:25:58.954845   45576 system_pods.go:61] "storage-provisioner" [38ca7a34-defc-45f6-924e-afa308acfab7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:25:58.954851   45576 system_pods.go:74] duration metric: took 22.106378ms to wait for pod list to return data ...
	I0729 11:25:58.954857   45576 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:25:58.958106   45576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:25:58.958135   45576 node_conditions.go:123] node cpu capacity is 2
	I0729 11:25:58.958145   45576 node_conditions.go:105] duration metric: took 3.283299ms to run NodePressure ...
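The node_conditions.go lines above read each node's ephemeral-storage and CPU capacity while verifying NodePressure. A rough client-go equivalent, assuming a clientset has already been built for this cluster (names are illustrative):

package nodeinfo

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists each node's CPU and ephemeral-storage capacity,
// the same fields the NodePressure check in the log reports.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
	return nil
}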
	I0729 11:25:58.958159   45576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:25:59.128570   45576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:25:59.132135   45576 kubeadm.go:739] kubelet initialised
	I0729 11:25:59.132163   45576 kubeadm.go:740] duration metric: took 3.564041ms waiting for restarted kubelet to initialise ...
	I0729 11:25:59.132172   45576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:25:59.136745   45576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace to be "Ready" ...
	I0729 11:25:59.141941   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.141964   45576 pod_ready.go:81] duration metric: took 5.196673ms for pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace to be "Ready" ...
	E0729 11:25:59.141981   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.141996   45576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ll72b" in "kube-system" namespace to be "Ready" ...
	I0729 11:25:59.146102   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "coredns-6d4b75cb6d-ll72b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.146126   45576 pod_ready.go:81] duration metric: took 4.11703ms for pod "coredns-6d4b75cb6d-ll72b" in "kube-system" namespace to be "Ready" ...
	E0729 11:25:59.146136   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "coredns-6d4b75cb6d-ll72b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.146144   45576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:25:59.149683   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "etcd-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.149701   45576 pod_ready.go:81] duration metric: took 3.549374ms for pod "etcd-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	E0729 11:25:59.149710   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "etcd-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.149718   45576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:25:59.336367   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "kube-apiserver-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.336396   45576 pod_ready.go:81] duration metric: took 186.668016ms for pod "kube-apiserver-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	E0729 11:25:59.336408   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "kube-apiserver-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.336416   45576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:25:59.739808   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.739839   45576 pod_ready.go:81] duration metric: took 403.412032ms for pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	E0729 11:25:59.739848   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:25:59.739854   45576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-82zpt" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:00.137793   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "kube-proxy-82zpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:00.137822   45576 pod_ready.go:81] duration metric: took 397.960323ms for pod "kube-proxy-82zpt" in "kube-system" namespace to be "Ready" ...
	E0729 11:26:00.137831   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "kube-proxy-82zpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:00.137836   45576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:00.536319   45576 pod_ready.go:97] node "test-preload-310966" hosting pod "kube-scheduler-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:00.536346   45576 pod_ready.go:81] duration metric: took 398.50209ms for pod "kube-scheduler-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	E0729 11:26:00.536357   45576 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-310966" hosting pod "kube-scheduler-test-preload-310966" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:00.536365   45576 pod_ready.go:38] duration metric: took 1.404176432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
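The pod_ready.go entries above poll each system-critical pod and skip ahead while the node itself is still NotReady. A condensed sketch of that kind of wait with client-go follows; the helper name and two-second interval are assumptions, not minikube's implementation.

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, mirroring the "waiting up to 4m0s for pod ..." lines above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}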
	I0729 11:26:00.536384   45576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:26:00.548711   45576 ops.go:34] apiserver oom_adj: -16
	I0729 11:26:00.548731   45576 kubeadm.go:597] duration metric: took 9.338837194s to restartPrimaryControlPlane
	I0729 11:26:00.548740   45576 kubeadm.go:394] duration metric: took 9.392440557s to StartCluster
	I0729 11:26:00.548754   45576 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:26:00.548837   45576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:26:00.549597   45576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:26:00.549850   45576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:26:00.549887   45576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:26:00.549963   45576 addons.go:69] Setting storage-provisioner=true in profile "test-preload-310966"
	I0729 11:26:00.549986   45576 addons.go:69] Setting default-storageclass=true in profile "test-preload-310966"
	I0729 11:26:00.550007   45576 addons.go:234] Setting addon storage-provisioner=true in "test-preload-310966"
	I0729 11:26:00.550012   45576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-310966"
	W0729 11:26:00.550015   45576 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:26:00.550042   45576 host.go:66] Checking if "test-preload-310966" exists ...
	I0729 11:26:00.550092   45576 config.go:182] Loaded profile config "test-preload-310966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 11:26:00.550325   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:26:00.550356   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:26:00.550374   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:26:00.550462   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:26:00.551542   45576 out.go:177] * Verifying Kubernetes components...
	I0729 11:26:00.552951   45576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:26:00.565213   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I0729 11:26:00.565371   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0729 11:26:00.565638   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:26:00.565708   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:26:00.566126   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:26:00.566136   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:26:00.566152   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:26:00.566154   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:26:00.566485   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:26:00.566513   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:26:00.566728   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetState
	I0729 11:26:00.567025   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:26:00.567065   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:26:00.568949   45576 kapi.go:59] client config for test-preload-310966: &rest.Config{Host:"https://192.168.39.84:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.crt", KeyFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.key", CAFile:"/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
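The kapi.go entry above dumps the rest.Config minikube assembles from the profile's client certificates. A minimal sketch of building an equivalent clientset with client-go, using the host and certificate paths shown in that dump (the helper name is illustrative):

package kapiclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientset builds a clientset from the endpoint and certificate files
// reported in the rest.Config dump for profile "test-preload-310966".
func newClientset() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.39.84:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/19337-3845/.minikube/profiles/test-preload-310966/client.key",
			CAFile:   "/home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}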
	I0729 11:26:00.569227   45576 addons.go:234] Setting addon default-storageclass=true in "test-preload-310966"
	W0729 11:26:00.569243   45576 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:26:00.569270   45576 host.go:66] Checking if "test-preload-310966" exists ...
	I0729 11:26:00.569542   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:26:00.569577   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:26:00.582001   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0729 11:26:00.582493   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:26:00.583037   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:26:00.583064   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:26:00.583277   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0729 11:26:00.583388   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:26:00.583567   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetState
	I0729 11:26:00.583604   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:26:00.584058   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:26:00.584080   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:26:00.584423   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:26:00.585029   45576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:26:00.585075   45576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:26:00.585246   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:26:00.587124   45576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:26:00.588391   45576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:26:00.588403   45576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:26:00.588418   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:26:00.590917   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:26:00.591536   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:26:00.591563   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:26:00.591761   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:26:00.591996   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:26:00.592163   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:26:00.592323   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:26:00.603466   45576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0729 11:26:00.603917   45576 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:26:00.604453   45576 main.go:141] libmachine: Using API Version  1
	I0729 11:26:00.604473   45576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:26:00.604818   45576 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:26:00.605020   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetState
	I0729 11:26:00.606902   45576 main.go:141] libmachine: (test-preload-310966) Calling .DriverName
	I0729 11:26:00.607160   45576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:26:00.607175   45576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:26:00.607192   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHHostname
	I0729 11:26:00.609992   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:26:00.610454   45576 main.go:141] libmachine: (test-preload-310966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:87:77", ip: ""} in network mk-test-preload-310966: {Iface:virbr1 ExpiryTime:2024-07-29 12:25:28 +0000 UTC Type:0 Mac:52:54:00:8c:87:77 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:test-preload-310966 Clientid:01:52:54:00:8c:87:77}
	I0729 11:26:00.610477   45576 main.go:141] libmachine: (test-preload-310966) DBG | domain test-preload-310966 has defined IP address 192.168.39.84 and MAC address 52:54:00:8c:87:77 in network mk-test-preload-310966
	I0729 11:26:00.610726   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHPort
	I0729 11:26:00.610913   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHKeyPath
	I0729 11:26:00.611078   45576 main.go:141] libmachine: (test-preload-310966) Calling .GetSSHUsername
	I0729 11:26:00.611196   45576 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/test-preload-310966/id_rsa Username:docker}
	I0729 11:26:00.746148   45576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:26:00.764745   45576 node_ready.go:35] waiting up to 6m0s for node "test-preload-310966" to be "Ready" ...
	I0729 11:26:00.825432   45576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:26:00.940355   45576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:26:01.803542   45576 main.go:141] libmachine: Making call to close driver server
	I0729 11:26:01.803574   45576 main.go:141] libmachine: (test-preload-310966) Calling .Close
	I0729 11:26:01.803862   45576 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:26:01.803885   45576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:26:01.803919   45576 main.go:141] libmachine: (test-preload-310966) DBG | Closing plugin on server side
	I0729 11:26:01.803983   45576 main.go:141] libmachine: Making call to close driver server
	I0729 11:26:01.803998   45576 main.go:141] libmachine: (test-preload-310966) Calling .Close
	I0729 11:26:01.804275   45576 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:26:01.804295   45576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:26:01.804302   45576 main.go:141] libmachine: (test-preload-310966) DBG | Closing plugin on server side
	I0729 11:26:01.813334   45576 main.go:141] libmachine: Making call to close driver server
	I0729 11:26:01.813352   45576 main.go:141] libmachine: (test-preload-310966) Calling .Close
	I0729 11:26:01.813601   45576 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:26:01.813618   45576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:26:01.831856   45576 main.go:141] libmachine: Making call to close driver server
	I0729 11:26:01.831889   45576 main.go:141] libmachine: (test-preload-310966) Calling .Close
	I0729 11:26:01.832171   45576 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:26:01.832210   45576 main.go:141] libmachine: (test-preload-310966) DBG | Closing plugin on server side
	I0729 11:26:01.832234   45576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:26:01.832249   45576 main.go:141] libmachine: Making call to close driver server
	I0729 11:26:01.832256   45576 main.go:141] libmachine: (test-preload-310966) Calling .Close
	I0729 11:26:01.832459   45576 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:26:01.832472   45576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:26:01.832488   45576 main.go:141] libmachine: (test-preload-310966) DBG | Closing plugin on server side
	I0729 11:26:01.834345   45576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 11:26:01.835543   45576 addons.go:510] duration metric: took 1.285664159s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 11:26:02.768465   45576 node_ready.go:53] node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:04.769199   45576 node_ready.go:53] node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:06.769869   45576 node_ready.go:53] node "test-preload-310966" has status "Ready":"False"
	I0729 11:26:08.269892   45576 node_ready.go:49] node "test-preload-310966" has status "Ready":"True"
	I0729 11:26:08.269919   45576 node_ready.go:38] duration metric: took 7.505140056s for node "test-preload-310966" to be "Ready" ...
	I0729 11:26:08.269929   45576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:26:08.276195   45576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.281393   45576 pod_ready.go:92] pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:08.281414   45576 pod_ready.go:81] duration metric: took 5.192919ms for pod "coredns-6d4b75cb6d-7k4pb" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.281426   45576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.791757   45576 pod_ready.go:92] pod "etcd-test-preload-310966" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:08.791781   45576 pod_ready.go:81] duration metric: took 510.347244ms for pod "etcd-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.791793   45576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.797749   45576 pod_ready.go:92] pod "kube-apiserver-test-preload-310966" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:08.797774   45576 pod_ready.go:81] duration metric: took 5.972221ms for pod "kube-apiserver-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.797788   45576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.803133   45576 pod_ready.go:92] pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:08.803155   45576 pod_ready.go:81] duration metric: took 5.358453ms for pod "kube-controller-manager-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:08.803167   45576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-82zpt" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:09.069912   45576 pod_ready.go:92] pod "kube-proxy-82zpt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:09.069937   45576 pod_ready.go:81] duration metric: took 266.763851ms for pod "kube-proxy-82zpt" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:09.069946   45576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:09.469019   45576 pod_ready.go:92] pod "kube-scheduler-test-preload-310966" in "kube-system" namespace has status "Ready":"True"
	I0729 11:26:09.469040   45576 pod_ready.go:81] duration metric: took 399.087865ms for pod "kube-scheduler-test-preload-310966" in "kube-system" namespace to be "Ready" ...
	I0729 11:26:09.469050   45576 pod_ready.go:38] duration metric: took 1.199109143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:26:09.469063   45576 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:26:09.469123   45576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:26:09.484662   45576 api_server.go:72] duration metric: took 8.93476979s to wait for apiserver process to appear ...
	I0729 11:26:09.484691   45576 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:26:09.484709   45576 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0729 11:26:09.490034   45576 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0729 11:26:09.491023   45576 api_server.go:141] control plane version: v1.24.4
	I0729 11:26:09.491047   45576 api_server.go:131] duration metric: took 6.349406ms to wait for apiserver health ...
	I0729 11:26:09.491055   45576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:26:09.672006   45576 system_pods.go:59] 7 kube-system pods found
	I0729 11:26:09.672032   45576 system_pods.go:61] "coredns-6d4b75cb6d-7k4pb" [6ee6e009-c299-48ae-ac4c-cb2df34a9ce4] Running
	I0729 11:26:09.672036   45576 system_pods.go:61] "etcd-test-preload-310966" [90d3968b-aa91-4e6e-8ef3-35946188f78d] Running
	I0729 11:26:09.672040   45576 system_pods.go:61] "kube-apiserver-test-preload-310966" [0205b89b-4ca6-41d0-82a0-4c5dbcb0b95f] Running
	I0729 11:26:09.672044   45576 system_pods.go:61] "kube-controller-manager-test-preload-310966" [1eaa49ea-1069-4a14-af34-34b317b2f937] Running
	I0729 11:26:09.672055   45576 system_pods.go:61] "kube-proxy-82zpt" [dd873001-dd0f-4f80-9bf5-f0d4550aa1c1] Running
	I0729 11:26:09.672058   45576 system_pods.go:61] "kube-scheduler-test-preload-310966" [52acc999-a2fe-4f83-ab1c-feaa8e31ebb4] Running
	I0729 11:26:09.672061   45576 system_pods.go:61] "storage-provisioner" [38ca7a34-defc-45f6-924e-afa308acfab7] Running
	I0729 11:26:09.672068   45576 system_pods.go:74] duration metric: took 181.007361ms to wait for pod list to return data ...
	I0729 11:26:09.672077   45576 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:26:09.870004   45576 default_sa.go:45] found service account: "default"
	I0729 11:26:09.870030   45576 default_sa.go:55] duration metric: took 197.946471ms for default service account to be created ...
	I0729 11:26:09.870038   45576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:26:10.072095   45576 system_pods.go:86] 7 kube-system pods found
	I0729 11:26:10.072126   45576 system_pods.go:89] "coredns-6d4b75cb6d-7k4pb" [6ee6e009-c299-48ae-ac4c-cb2df34a9ce4] Running
	I0729 11:26:10.072133   45576 system_pods.go:89] "etcd-test-preload-310966" [90d3968b-aa91-4e6e-8ef3-35946188f78d] Running
	I0729 11:26:10.072140   45576 system_pods.go:89] "kube-apiserver-test-preload-310966" [0205b89b-4ca6-41d0-82a0-4c5dbcb0b95f] Running
	I0729 11:26:10.072149   45576 system_pods.go:89] "kube-controller-manager-test-preload-310966" [1eaa49ea-1069-4a14-af34-34b317b2f937] Running
	I0729 11:26:10.072153   45576 system_pods.go:89] "kube-proxy-82zpt" [dd873001-dd0f-4f80-9bf5-f0d4550aa1c1] Running
	I0729 11:26:10.072156   45576 system_pods.go:89] "kube-scheduler-test-preload-310966" [52acc999-a2fe-4f83-ab1c-feaa8e31ebb4] Running
	I0729 11:26:10.072160   45576 system_pods.go:89] "storage-provisioner" [38ca7a34-defc-45f6-924e-afa308acfab7] Running
	I0729 11:26:10.072166   45576 system_pods.go:126] duration metric: took 202.123256ms to wait for k8s-apps to be running ...
	I0729 11:26:10.072172   45576 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:26:10.072230   45576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:26:10.088044   45576 system_svc.go:56] duration metric: took 15.863472ms WaitForService to wait for kubelet
	I0729 11:26:10.088077   45576 kubeadm.go:582] duration metric: took 9.538200304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:26:10.088094   45576 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:26:10.270371   45576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:26:10.270395   45576 node_conditions.go:123] node cpu capacity is 2
	I0729 11:26:10.270404   45576 node_conditions.go:105] duration metric: took 182.305893ms to run NodePressure ...
	I0729 11:26:10.270415   45576 start.go:241] waiting for startup goroutines ...
	I0729 11:26:10.270421   45576 start.go:246] waiting for cluster config update ...
	I0729 11:26:10.270429   45576 start.go:255] writing updated cluster config ...
	I0729 11:26:10.270722   45576 ssh_runner.go:195] Run: rm -f paused
	I0729 11:26:10.316943   45576 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0729 11:26:10.319019   45576 out.go:177] 
	W0729 11:26:10.320408   45576 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0729 11:26:10.321861   45576 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0729 11:26:10.323229   45576 out.go:177] * Done! kubectl is now configured to use "test-preload-310966" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.242578700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252371242517393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6033b69-8850-46e7-bea5-7846afe88afd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.243795983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00a3f163-bddd-40bc-a8e1-f8cb7f999dd8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.243849853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00a3f163-bddd-40bc-a8e1-f8cb7f999dd8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.244003752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7add86d5f286c7852fd7d62189d456386618f90ddbc31ebd7624cc0c7fc7dac,PodSandboxId:e2f787c593799cc576793754b0351cc33b4be91f03d18525a528b4666476e566,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722252366861264635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7k4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee6e009-c299-48ae-ac4c-cb2df34a9ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f3f75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a39a7f24089e3bb011908542be226aec93e4734d8f0b23f06d5c8c662805f32c,PodSandboxId:d6ab6b560957259811295a1ebf2cf1bc1a8b0f80ba0461183a0febe768462112,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722252359644946297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-82zpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dd873001-dd0f-4f80-9bf5-f0d4550aa1c1,},Annotations:map[string]string{io.kubernetes.container.hash: 19d3a9c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc887b86f6d0525631043e5d9e681af064f9c9b852e2090f5242c4aa4e38608,PodSandboxId:145757e3ba889088d35d122d6c529828c8e586834267f91c3626a963809bcbe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252359341306720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
ca7a34-defc-45f6-924e-afa308acfab7,},Annotations:map[string]string{io.kubernetes.container.hash: e3bb55d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97cb062f3a72baaaaf9ca3b00a910772b92bb38afc4e034415f781209e6ce800,PodSandboxId:3280b2b667f5177edda0b22709e59a8811e728ad05cd36915f9d6a67c98c0c18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722252353129773252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f5afffdd90699250954faf9e90cc75a0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50359814b215dd7dd0433147c1ea192b1dc3f71d4bf6468291aa242cc9507752,PodSandboxId:9ab881807add47f37a778841150406793cb7d0fa1481ffa490ee25ddc2272223,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722252353083464354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef193d19e7d475ad039b17b
54dfe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 718defb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc14195cf5174c6cb72cffdcfc5857ec4519e5e12860392b04d408bf197354cd,PodSandboxId:6d4d74b5d4c8d1036906bc43b62e65cce4ecbdfcbc165754918fa404483e8291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722252353109466653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09322b31c18a359fc21658b9a919eb6,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45e255d0456659b488ab0b50c7a3d31bda2c9a80cd027365edbb1748c15afc9,PodSandboxId:ee6419182072f9deea02b5bdfa9eed189377f9fe97e32626e5707aabbaf1c507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722252353017542389,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3339f2190d3512f465c2675af6dec25,},Annotations
:map[string]string{io.kubernetes.container.hash: 99e4bd67,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00a3f163-bddd-40bc-a8e1-f8cb7f999dd8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.283855231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1510ff9c-dd36-47dd-befb-633301def58c name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.283929623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1510ff9c-dd36-47dd-befb-633301def58c name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.284938771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c95c79a-404f-4299-9e77-6133ac39a0fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.285677407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252371285649556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c95c79a-404f-4299-9e77-6133ac39a0fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.286590267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188343aa-411b-4abe-b157-d8978b9f134e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.286640995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188343aa-411b-4abe-b157-d8978b9f134e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.286801154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7add86d5f286c7852fd7d62189d456386618f90ddbc31ebd7624cc0c7fc7dac,PodSandboxId:e2f787c593799cc576793754b0351cc33b4be91f03d18525a528b4666476e566,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722252366861264635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7k4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee6e009-c299-48ae-ac4c-cb2df34a9ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f3f75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a39a7f24089e3bb011908542be226aec93e4734d8f0b23f06d5c8c662805f32c,PodSandboxId:d6ab6b560957259811295a1ebf2cf1bc1a8b0f80ba0461183a0febe768462112,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722252359644946297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-82zpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dd873001-dd0f-4f80-9bf5-f0d4550aa1c1,},Annotations:map[string]string{io.kubernetes.container.hash: 19d3a9c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc887b86f6d0525631043e5d9e681af064f9c9b852e2090f5242c4aa4e38608,PodSandboxId:145757e3ba889088d35d122d6c529828c8e586834267f91c3626a963809bcbe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252359341306720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
ca7a34-defc-45f6-924e-afa308acfab7,},Annotations:map[string]string{io.kubernetes.container.hash: e3bb55d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97cb062f3a72baaaaf9ca3b00a910772b92bb38afc4e034415f781209e6ce800,PodSandboxId:3280b2b667f5177edda0b22709e59a8811e728ad05cd36915f9d6a67c98c0c18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722252353129773252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f5afffdd90699250954faf9e90cc75a0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50359814b215dd7dd0433147c1ea192b1dc3f71d4bf6468291aa242cc9507752,PodSandboxId:9ab881807add47f37a778841150406793cb7d0fa1481ffa490ee25ddc2272223,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722252353083464354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef193d19e7d475ad039b17b
54dfe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 718defb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc14195cf5174c6cb72cffdcfc5857ec4519e5e12860392b04d408bf197354cd,PodSandboxId:6d4d74b5d4c8d1036906bc43b62e65cce4ecbdfcbc165754918fa404483e8291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722252353109466653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09322b31c18a359fc21658b9a919eb6,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45e255d0456659b488ab0b50c7a3d31bda2c9a80cd027365edbb1748c15afc9,PodSandboxId:ee6419182072f9deea02b5bdfa9eed189377f9fe97e32626e5707aabbaf1c507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722252353017542389,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3339f2190d3512f465c2675af6dec25,},Annotations
:map[string]string{io.kubernetes.container.hash: 99e4bd67,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=188343aa-411b-4abe-b157-d8978b9f134e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.332662523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8d05f18-c931-4578-92d5-2f3d91cdea76 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.332759576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8d05f18-c931-4578-92d5-2f3d91cdea76 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.334052094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebc939f8-c2e2-4461-aab7-31c3b4f7eaa3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.334672673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252371334638168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebc939f8-c2e2-4461-aab7-31c3b4f7eaa3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.335202142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1d6fbf0-b7d8-40b0-a558-5a314d1d7010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.335251680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1d6fbf0-b7d8-40b0-a558-5a314d1d7010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.335622090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7add86d5f286c7852fd7d62189d456386618f90ddbc31ebd7624cc0c7fc7dac,PodSandboxId:e2f787c593799cc576793754b0351cc33b4be91f03d18525a528b4666476e566,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722252366861264635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7k4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee6e009-c299-48ae-ac4c-cb2df34a9ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f3f75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a39a7f24089e3bb011908542be226aec93e4734d8f0b23f06d5c8c662805f32c,PodSandboxId:d6ab6b560957259811295a1ebf2cf1bc1a8b0f80ba0461183a0febe768462112,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722252359644946297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-82zpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dd873001-dd0f-4f80-9bf5-f0d4550aa1c1,},Annotations:map[string]string{io.kubernetes.container.hash: 19d3a9c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc887b86f6d0525631043e5d9e681af064f9c9b852e2090f5242c4aa4e38608,PodSandboxId:145757e3ba889088d35d122d6c529828c8e586834267f91c3626a963809bcbe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252359341306720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
ca7a34-defc-45f6-924e-afa308acfab7,},Annotations:map[string]string{io.kubernetes.container.hash: e3bb55d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97cb062f3a72baaaaf9ca3b00a910772b92bb38afc4e034415f781209e6ce800,PodSandboxId:3280b2b667f5177edda0b22709e59a8811e728ad05cd36915f9d6a67c98c0c18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722252353129773252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f5afffdd90699250954faf9e90cc75a0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50359814b215dd7dd0433147c1ea192b1dc3f71d4bf6468291aa242cc9507752,PodSandboxId:9ab881807add47f37a778841150406793cb7d0fa1481ffa490ee25ddc2272223,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722252353083464354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef193d19e7d475ad039b17b
54dfe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 718defb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc14195cf5174c6cb72cffdcfc5857ec4519e5e12860392b04d408bf197354cd,PodSandboxId:6d4d74b5d4c8d1036906bc43b62e65cce4ecbdfcbc165754918fa404483e8291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722252353109466653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09322b31c18a359fc21658b9a919eb6,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45e255d0456659b488ab0b50c7a3d31bda2c9a80cd027365edbb1748c15afc9,PodSandboxId:ee6419182072f9deea02b5bdfa9eed189377f9fe97e32626e5707aabbaf1c507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722252353017542389,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3339f2190d3512f465c2675af6dec25,},Annotations
:map[string]string{io.kubernetes.container.hash: 99e4bd67,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1d6fbf0-b7d8-40b0-a558-5a314d1d7010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.375004197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=523ad049-907e-4af4-9b1a-1c64e2767aee name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.375141949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=523ad049-907e-4af4-9b1a-1c64e2767aee name=/runtime.v1.RuntimeService/Version
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.376447407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5e4a3d5-f89c-4b50-b4b1-857fdbd3329f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.376871675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252371376851121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5e4a3d5-f89c-4b50-b4b1-857fdbd3329f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.377623890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02e0f61d-1872-4956-b25e-635b13e36c72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.377694690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02e0f61d-1872-4956-b25e-635b13e36c72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:26:11 test-preload-310966 crio[698]: time="2024-07-29 11:26:11.377869122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7add86d5f286c7852fd7d62189d456386618f90ddbc31ebd7624cc0c7fc7dac,PodSandboxId:e2f787c593799cc576793754b0351cc33b4be91f03d18525a528b4666476e566,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722252366861264635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7k4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee6e009-c299-48ae-ac4c-cb2df34a9ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f3f75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a39a7f24089e3bb011908542be226aec93e4734d8f0b23f06d5c8c662805f32c,PodSandboxId:d6ab6b560957259811295a1ebf2cf1bc1a8b0f80ba0461183a0febe768462112,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722252359644946297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-82zpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dd873001-dd0f-4f80-9bf5-f0d4550aa1c1,},Annotations:map[string]string{io.kubernetes.container.hash: 19d3a9c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc887b86f6d0525631043e5d9e681af064f9c9b852e2090f5242c4aa4e38608,PodSandboxId:145757e3ba889088d35d122d6c529828c8e586834267f91c3626a963809bcbe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252359341306720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
ca7a34-defc-45f6-924e-afa308acfab7,},Annotations:map[string]string{io.kubernetes.container.hash: e3bb55d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97cb062f3a72baaaaf9ca3b00a910772b92bb38afc4e034415f781209e6ce800,PodSandboxId:3280b2b667f5177edda0b22709e59a8811e728ad05cd36915f9d6a67c98c0c18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722252353129773252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f5afffdd90699250954faf9e90cc75a0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50359814b215dd7dd0433147c1ea192b1dc3f71d4bf6468291aa242cc9507752,PodSandboxId:9ab881807add47f37a778841150406793cb7d0fa1481ffa490ee25ddc2272223,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722252353083464354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef193d19e7d475ad039b17b
54dfe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 718defb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc14195cf5174c6cb72cffdcfc5857ec4519e5e12860392b04d408bf197354cd,PodSandboxId:6d4d74b5d4c8d1036906bc43b62e65cce4ecbdfcbc165754918fa404483e8291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722252353109466653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09322b31c18a359fc21658b9a919eb6,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45e255d0456659b488ab0b50c7a3d31bda2c9a80cd027365edbb1748c15afc9,PodSandboxId:ee6419182072f9deea02b5bdfa9eed189377f9fe97e32626e5707aabbaf1c507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722252353017542389,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-310966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3339f2190d3512f465c2675af6dec25,},Annotations
:map[string]string{io.kubernetes.container.hash: 99e4bd67,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02e0f61d-1872-4956-b25e-635b13e36c72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7add86d5f286       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   e2f787c593799       coredns-6d4b75cb6d-7k4pb
	a39a7f24089e3       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   d6ab6b5609572       kube-proxy-82zpt
	6bc887b86f6d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   145757e3ba889       storage-provisioner
	97cb062f3a72b       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   3280b2b667f51       kube-controller-manager-test-preload-310966
	dc14195cf5174       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   6d4d74b5d4c8d       kube-scheduler-test-preload-310966
	50359814b215d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   9ab881807add4       etcd-test-preload-310966
	b45e255d04566       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   ee6419182072f       kube-apiserver-test-preload-310966
	
	
	==> coredns [e7add86d5f286c7852fd7d62189d456386618f90ddbc31ebd7624cc0c7fc7dac] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47467 - 48571 "HINFO IN 6797763808802371238.6775981375337291404. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014775542s
	
	
	==> describe nodes <==
	Name:               test-preload-310966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-310966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=test-preload-310966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_24_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:24:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-310966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:26:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:26:07 +0000   Mon, 29 Jul 2024 11:24:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:26:07 +0000   Mon, 29 Jul 2024 11:24:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:26:07 +0000   Mon, 29 Jul 2024 11:24:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:26:07 +0000   Mon, 29 Jul 2024 11:26:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    test-preload-310966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4357fbeb34d14e169f79da96dc95f9f2
	  System UUID:                4357fbeb-34d1-4e16-9f79-da96dc95f9f2
	  Boot ID:                    2e4dd765-a2d1-45e4-a025-10ef651957cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7k4pb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 etcd-test-preload-310966                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         95s
	  kube-system                 kube-apiserver-test-preload-310966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-310966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-82zpt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-test-preload-310966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  95s                kubelet          Node test-preload-310966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                kubelet          Node test-preload-310966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s                kubelet          Node test-preload-310966 status is now: NodeHasSufficientPID
	  Normal  NodeReady                85s                kubelet          Node test-preload-310966 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node test-preload-310966 event: Registered Node test-preload-310966 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-310966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-310966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-310966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-310966 event: Registered Node test-preload-310966 in Controller
	
	
	==> dmesg <==
	[Jul29 11:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050600] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040120] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.801383] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.528815] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.675659] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.061560] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062117] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.186169] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.120271] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.298765] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[ +11.845694] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.056318] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.518445] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +6.552791] kauditd_printk_skb: 105 callbacks suppressed
	[Jul29 11:26] systemd-fstab-generator[1712]: Ignoring "noauto" option for root device
	[  +6.034501] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [50359814b215dd7dd0433147c1ea192b1dc3f71d4bf6468291aa242cc9507752] <==
	{"level":"info","ts":"2024-07-29T11:25:53.574Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"9759e6b18ded37f5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T11:25:53.575Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T11:25:53.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 switched to configuration voters=(10906001622919100405)"}
	{"level":"info","ts":"2024-07-29T11:25:53.578Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5f38fc1d36b986e7","local-member-id":"9759e6b18ded37f5","added-peer-id":"9759e6b18ded37f5","added-peer-peer-urls":["https://192.168.39.84:2380"]}
	{"level":"info","ts":"2024-07-29T11:25:53.578Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5f38fc1d36b986e7","local-member-id":"9759e6b18ded37f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:25:53.578Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:25:53.581Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:25:53.583Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9759e6b18ded37f5","initial-advertise-peer-urls":["https://192.168.39.84:2380"],"listen-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.84:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:25:53.583Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2024-07-29T11:25:53.583Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2024-07-29T11:25:53.583Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 received MsgPreVoteResp from 9759e6b18ded37f5 at term 2"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 received MsgVoteResp from 9759e6b18ded37f5 at term 3"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9759e6b18ded37f5 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9759e6b18ded37f5 elected leader 9759e6b18ded37f5 at term 3"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9759e6b18ded37f5","local-member-attributes":"{Name:test-preload-310966 ClientURLs:[https://192.168.39.84:2379]}","request-path":"/0/members/9759e6b18ded37f5/attributes","cluster-id":"5f38fc1d36b986e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:25:55.136Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:25:55.138Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:25:55.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:25:55.140Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.84:2379"}
	{"level":"info","ts":"2024-07-29T11:25:55.146Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:25:55.146Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:26:11 up 0 min,  0 users,  load average: 0.53, 0.15, 0.05
	Linux test-preload-310966 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b45e255d0456659b488ab0b50c7a3d31bda2c9a80cd027365edbb1748c15afc9] <==
	I0729 11:25:57.498377       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0729 11:25:57.498389       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 11:25:57.498404       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 11:25:57.501640       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 11:25:57.541409       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0729 11:25:57.544031       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:25:57.641292       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:25:57.641503       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0729 11:25:57.647253       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0729 11:25:57.672804       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 11:25:57.686032       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:25:57.691384       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 11:25:57.693276       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:25:57.693537       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:25:57.698859       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 11:25:58.137797       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 11:25:58.495733       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:25:59.044639       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 11:25:59.060833       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 11:25:59.093613       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 11:25:59.108518       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:25:59.113681       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:25:59.923231       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0729 11:26:10.274650       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 11:26:10.424230       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [97cb062f3a72baaaaf9ca3b00a910772b92bb38afc4e034415f781209e6ce800] <==
	I0729 11:26:10.269870       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 11:26:10.269951       1 shared_informer.go:262] Caches are synced for deployment
	I0729 11:26:10.272612       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 11:26:10.276616       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 11:26:10.284562       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 11:26:10.297244       1 shared_informer.go:262] Caches are synced for ephemeral
	I0729 11:26:10.307693       1 shared_informer.go:262] Caches are synced for disruption
	I0729 11:26:10.307781       1 disruption.go:371] Sending events to api server.
	I0729 11:26:10.347965       1 shared_informer.go:262] Caches are synced for taint
	I0729 11:26:10.348191       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 11:26:10.348349       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-310966. Assuming now as a timestamp.
	I0729 11:26:10.348413       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 11:26:10.348670       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 11:26:10.348859       1 event.go:294] "Event occurred" object="test-preload-310966" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-310966 event: Registered Node test-preload-310966 in Controller"
	I0729 11:26:10.348980       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0729 11:26:10.366595       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 11:26:10.373669       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 11:26:10.380045       1 shared_informer.go:262] Caches are synced for job
	I0729 11:26:10.393690       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 11:26:10.419944       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0729 11:26:10.471939       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:26:10.491002       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:26:10.912339       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:26:10.943990       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:26:10.944037       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [a39a7f24089e3bb011908542be226aec93e4734d8f0b23f06d5c8c662805f32c] <==
	I0729 11:25:59.876950       1 node.go:163] Successfully retrieved node IP: 192.168.39.84
	I0729 11:25:59.877384       1 server_others.go:138] "Detected node IP" address="192.168.39.84"
	I0729 11:25:59.877455       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 11:25:59.912567       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 11:25:59.912642       1 server_others.go:206] "Using iptables Proxier"
	I0729 11:25:59.913266       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 11:25:59.914220       1 server.go:661] "Version info" version="v1.24.4"
	I0729 11:25:59.914393       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:25:59.916179       1 config.go:317] "Starting service config controller"
	I0729 11:25:59.916800       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 11:25:59.916899       1 config.go:444] "Starting node config controller"
	I0729 11:25:59.916957       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 11:25:59.917572       1 config.go:226] "Starting endpoint slice config controller"
	I0729 11:25:59.917600       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 11:26:00.017167       1 shared_informer.go:262] Caches are synced for service config
	I0729 11:26:00.017044       1 shared_informer.go:262] Caches are synced for node config
	I0729 11:26:00.017797       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [dc14195cf5174c6cb72cffdcfc5857ec4519e5e12860392b04d408bf197354cd] <==
	I0729 11:25:54.248316       1 serving.go:348] Generated self-signed cert in-memory
	W0729 11:25:57.551423       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:25:57.551575       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:25:57.551689       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:25:57.551719       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:25:57.618009       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0729 11:25:57.618045       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:25:57.628194       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0729 11:25:57.628890       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:25:57.633005       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:25:57.634973       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:25:57.733763       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403442    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume\") pod \"coredns-6d4b75cb6d-7k4pb\" (UID: \"6ee6e009-c299-48ae-ac4c-cb2df34a9ce4\") " pod="kube-system/coredns-6d4b75cb6d-7k4pb"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403461    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxrp\" (UniqueName: \"kubernetes.io/projected/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-kube-api-access-glxrp\") pod \"coredns-6d4b75cb6d-7k4pb\" (UID: \"6ee6e009-c299-48ae-ac4c-cb2df34a9ce4\") " pod="kube-system/coredns-6d4b75cb6d-7k4pb"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403488    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38ca7a34-defc-45f6-924e-afa308acfab7-tmp\") pod \"storage-provisioner\" (UID: \"38ca7a34-defc-45f6-924e-afa308acfab7\") " pod="kube-system/storage-provisioner"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403505    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd873001-dd0f-4f80-9bf5-f0d4550aa1c1-kube-proxy\") pod \"kube-proxy-82zpt\" (UID: \"dd873001-dd0f-4f80-9bf5-f0d4550aa1c1\") " pod="kube-system/kube-proxy-82zpt"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403535    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd873001-dd0f-4f80-9bf5-f0d4550aa1c1-xtables-lock\") pod \"kube-proxy-82zpt\" (UID: \"dd873001-dd0f-4f80-9bf5-f0d4550aa1c1\") " pod="kube-system/kube-proxy-82zpt"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403553    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd873001-dd0f-4f80-9bf5-f0d4550aa1c1-lib-modules\") pod \"kube-proxy-82zpt\" (UID: \"dd873001-dd0f-4f80-9bf5-f0d4550aa1c1\") " pod="kube-system/kube-proxy-82zpt"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.403573    1094 reconciler.go:159] "Reconciler: start to sync state"
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.871557    1094 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8644ef3-fd75-4c62-8abd-47fb4c02884e-config-volume\") pod \"a8644ef3-fd75-4c62-8abd-47fb4c02884e\" (UID: \"a8644ef3-fd75-4c62-8abd-47fb4c02884e\") "
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.871759    1094 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngcmn\" (UniqueName: \"kubernetes.io/projected/a8644ef3-fd75-4c62-8abd-47fb4c02884e-kube-api-access-ngcmn\") pod \"a8644ef3-fd75-4c62-8abd-47fb4c02884e\" (UID: \"a8644ef3-fd75-4c62-8abd-47fb4c02884e\") "
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: W0729 11:25:58.873008    1094 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a8644ef3-fd75-4c62-8abd-47fb4c02884e/volumes/kubernetes.io~projected/kube-api-access-ngcmn: clearQuota called, but quotas disabled
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: W0729 11:25:58.873256    1094 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a8644ef3-fd75-4c62-8abd-47fb4c02884e/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.873365    1094 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8644ef3-fd75-4c62-8abd-47fb4c02884e-kube-api-access-ngcmn" (OuterVolumeSpecName: "kube-api-access-ngcmn") pod "a8644ef3-fd75-4c62-8abd-47fb4c02884e" (UID: "a8644ef3-fd75-4c62-8abd-47fb4c02884e"). InnerVolumeSpecName "kube-api-access-ngcmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: E0729 11:25:58.873529    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: E0729 11:25:58.873638    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume podName:6ee6e009-c299-48ae-ac4c-cb2df34a9ce4 nodeName:}" failed. No retries permitted until 2024-07-29 11:25:59.373607273 +0000 UTC m=+7.175446460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume") pod "coredns-6d4b75cb6d-7k4pb" (UID: "6ee6e009-c299-48ae-ac4c-cb2df34a9ce4") : object "kube-system"/"coredns" not registered
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.874235    1094 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8644ef3-fd75-4c62-8abd-47fb4c02884e-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8644ef3-fd75-4c62-8abd-47fb4c02884e" (UID: "a8644ef3-fd75-4c62-8abd-47fb4c02884e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.972984    1094 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8644ef3-fd75-4c62-8abd-47fb4c02884e-config-volume\") on node \"test-preload-310966\" DevicePath \"\""
	Jul 29 11:25:58 test-preload-310966 kubelet[1094]: I0729 11:25:58.973037    1094 reconciler.go:384] "Volume detached for volume \"kube-api-access-ngcmn\" (UniqueName: \"kubernetes.io/projected/a8644ef3-fd75-4c62-8abd-47fb4c02884e-kube-api-access-ngcmn\") on node \"test-preload-310966\" DevicePath \"\""
	Jul 29 11:25:59 test-preload-310966 kubelet[1094]: E0729 11:25:59.374968    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 11:25:59 test-preload-310966 kubelet[1094]: E0729 11:25:59.375056    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume podName:6ee6e009-c299-48ae-ac4c-cb2df34a9ce4 nodeName:}" failed. No retries permitted until 2024-07-29 11:26:00.375036384 +0000 UTC m=+8.176875585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume") pod "coredns-6d4b75cb6d-7k4pb" (UID: "6ee6e009-c299-48ae-ac4c-cb2df34a9ce4") : object "kube-system"/"coredns" not registered
	Jul 29 11:26:00 test-preload-310966 kubelet[1094]: E0729 11:26:00.380661    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 11:26:00 test-preload-310966 kubelet[1094]: E0729 11:26:00.380725    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume podName:6ee6e009-c299-48ae-ac4c-cb2df34a9ce4 nodeName:}" failed. No retries permitted until 2024-07-29 11:26:02.380711261 +0000 UTC m=+10.182550448 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume") pod "coredns-6d4b75cb6d-7k4pb" (UID: "6ee6e009-c299-48ae-ac4c-cb2df34a9ce4") : object "kube-system"/"coredns" not registered
	Jul 29 11:26:00 test-preload-310966 kubelet[1094]: E0729 11:26:00.431404    1094 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7k4pb" podUID=6ee6e009-c299-48ae-ac4c-cb2df34a9ce4
	Jul 29 11:26:00 test-preload-310966 kubelet[1094]: I0729 11:26:00.436403    1094 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a8644ef3-fd75-4c62-8abd-47fb4c02884e path="/var/lib/kubelet/pods/a8644ef3-fd75-4c62-8abd-47fb4c02884e/volumes"
	Jul 29 11:26:02 test-preload-310966 kubelet[1094]: E0729 11:26:02.395577    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 11:26:02 test-preload-310966 kubelet[1094]: E0729 11:26:02.396306    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume podName:6ee6e009-c299-48ae-ac4c-cb2df34a9ce4 nodeName:}" failed. No retries permitted until 2024-07-29 11:26:06.396284516 +0000 UTC m=+14.198123703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee6e009-c299-48ae-ac4c-cb2df34a9ce4-config-volume") pod "coredns-6d4b75cb6d-7k4pb" (UID: "6ee6e009-c299-48ae-ac4c-cb2df34a9ce4") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [6bc887b86f6d0525631043e5d9e681af064f9c9b852e2090f5242c4aa4e38608] <==
	I0729 11:25:59.426212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-310966 -n test-preload-310966
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-310966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-310966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-310966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-310966: (1.124743477s)
--- FAIL: TestPreload (176.40s)

                                                
                                    
x
+
TestKubernetesUpgrade (389.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.119402696s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-302301] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-302301" primary control-plane node in "kubernetes-upgrade-302301" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:28:04.956828   47109 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:28:04.956967   47109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:28:04.956978   47109 out.go:304] Setting ErrFile to fd 2...
	I0729 11:28:04.956984   47109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:28:04.957281   47109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:28:04.957964   47109 out.go:298] Setting JSON to false
	I0729 11:28:04.959063   47109 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4231,"bootTime":1722248254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:28:04.959120   47109 start.go:139] virtualization: kvm guest
	I0729 11:28:04.962110   47109 out.go:177] * [kubernetes-upgrade-302301] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:28:04.964345   47109 notify.go:220] Checking for updates...
	I0729 11:28:04.966025   47109 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:28:04.967602   47109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:28:04.969498   47109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:28:04.970984   47109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:28:04.974533   47109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:28:04.976601   47109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:28:04.978762   47109 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:28:05.017553   47109 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:28:05.018730   47109 start.go:297] selected driver: kvm2
	I0729 11:28:05.018749   47109 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:28:05.018763   47109 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:28:05.019642   47109 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:28:05.019736   47109 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:28:05.034741   47109 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:28:05.034796   47109 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:28:05.035025   47109 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 11:28:05.035051   47109 cni.go:84] Creating CNI manager for ""
	I0729 11:28:05.035062   47109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:28:05.035086   47109 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:28:05.035157   47109 start.go:340] cluster config:
	{Name:kubernetes-upgrade-302301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:28:05.035272   47109 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:28:05.037783   47109 out.go:177] * Starting "kubernetes-upgrade-302301" primary control-plane node in "kubernetes-upgrade-302301" cluster
	I0729 11:28:05.039025   47109 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:28:05.039062   47109 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:28:05.039082   47109 cache.go:56] Caching tarball of preloaded images
	I0729 11:28:05.039150   47109 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:28:05.039161   47109 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:28:05.039444   47109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/config.json ...
	I0729 11:28:05.039468   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/config.json: {Name:mk3fdd22adfc18bc78bc62a6f74c67944d2c7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:05.039641   47109 start.go:360] acquireMachinesLock for kubernetes-upgrade-302301: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:28:05.039686   47109 start.go:364] duration metric: took 28.297µs to acquireMachinesLock for "kubernetes-upgrade-302301"
	I0729 11:28:05.039713   47109 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-302301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:28:05.039821   47109 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:28:05.042192   47109 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:28:05.042324   47109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:28:05.042369   47109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:28:05.059080   47109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0729 11:28:05.059568   47109 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:28:05.060234   47109 main.go:141] libmachine: Using API Version  1
	I0729 11:28:05.060275   47109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:28:05.060692   47109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:28:05.060891   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetMachineName
	I0729 11:28:05.061077   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:05.061265   47109 start.go:159] libmachine.API.Create for "kubernetes-upgrade-302301" (driver="kvm2")
	I0729 11:28:05.061318   47109 client.go:168] LocalClient.Create starting
	I0729 11:28:05.061358   47109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 11:28:05.061403   47109 main.go:141] libmachine: Decoding PEM data...
	I0729 11:28:05.061426   47109 main.go:141] libmachine: Parsing certificate...
	I0729 11:28:05.061504   47109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 11:28:05.061531   47109 main.go:141] libmachine: Decoding PEM data...
	I0729 11:28:05.061552   47109 main.go:141] libmachine: Parsing certificate...
	I0729 11:28:05.061574   47109 main.go:141] libmachine: Running pre-create checks...
	I0729 11:28:05.061592   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .PreCreateCheck
	I0729 11:28:05.062118   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetConfigRaw
	I0729 11:28:05.062586   47109 main.go:141] libmachine: Creating machine...
	I0729 11:28:05.062603   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .Create
	I0729 11:28:05.062764   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Creating KVM machine...
	I0729 11:28:05.064224   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found existing default KVM network
	I0729 11:28:05.064998   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:05.064826   47188 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015300}
	I0729 11:28:05.065034   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | created network xml: 
	I0729 11:28:05.065047   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | <network>
	I0729 11:28:05.065062   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   <name>mk-kubernetes-upgrade-302301</name>
	I0729 11:28:05.065075   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   <dns enable='no'/>
	I0729 11:28:05.065089   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   
	I0729 11:28:05.065111   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 11:28:05.065121   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |     <dhcp>
	I0729 11:28:05.065134   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 11:28:05.065149   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |     </dhcp>
	I0729 11:28:05.065163   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   </ip>
	I0729 11:28:05.065173   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG |   
	I0729 11:28:05.065202   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | </network>
	I0729 11:28:05.065212   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | 
	I0729 11:28:05.071043   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | trying to create private KVM network mk-kubernetes-upgrade-302301 192.168.39.0/24...
	I0729 11:28:05.145381   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | private KVM network mk-kubernetes-upgrade-302301 192.168.39.0/24 created
	I0729 11:28:05.145430   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:05.145347   47188 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:28:05.145447   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301 ...
	I0729 11:28:05.145469   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:28:05.145484   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:28:05.394790   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:05.394642   47188 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa...
	I0729 11:28:05.768558   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:05.768410   47188 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/kubernetes-upgrade-302301.rawdisk...
	I0729 11:28:05.768595   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Writing magic tar header
	I0729 11:28:05.768613   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Writing SSH key tar header
	I0729 11:28:05.768628   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:05.768514   47188 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301 ...
	I0729 11:28:05.768642   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301 (perms=drwx------)
	I0729 11:28:05.768655   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301
	I0729 11:28:05.768665   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 11:28:05.768681   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:28:05.768697   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 11:28:05.768717   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:28:05.768727   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:28:05.768739   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:28:05.768750   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Checking permissions on dir: /home
	I0729 11:28:05.768775   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 11:28:05.768788   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Skipping /home - not owner
	I0729 11:28:05.768818   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 11:28:05.768848   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:28:05.768864   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:28:05.768874   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Creating domain...
	I0729 11:28:05.769873   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) define libvirt domain using xml: 
	I0729 11:28:05.769910   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) <domain type='kvm'>
	I0729 11:28:05.769924   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <name>kubernetes-upgrade-302301</name>
	I0729 11:28:05.769936   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <memory unit='MiB'>2200</memory>
	I0729 11:28:05.769943   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <vcpu>2</vcpu>
	I0729 11:28:05.769948   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <features>
	I0729 11:28:05.769958   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <acpi/>
	I0729 11:28:05.769969   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <apic/>
	I0729 11:28:05.769978   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <pae/>
	I0729 11:28:05.769993   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     
	I0729 11:28:05.770010   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   </features>
	I0729 11:28:05.770020   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <cpu mode='host-passthrough'>
	I0729 11:28:05.770029   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   
	I0729 11:28:05.770038   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   </cpu>
	I0729 11:28:05.770056   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <os>
	I0729 11:28:05.770078   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <type>hvm</type>
	I0729 11:28:05.770088   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <boot dev='cdrom'/>
	I0729 11:28:05.770099   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <boot dev='hd'/>
	I0729 11:28:05.770110   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <bootmenu enable='no'/>
	I0729 11:28:05.770124   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   </os>
	I0729 11:28:05.770136   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   <devices>
	I0729 11:28:05.770147   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <disk type='file' device='cdrom'>
	I0729 11:28:05.770164   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/boot2docker.iso'/>
	I0729 11:28:05.770175   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <target dev='hdc' bus='scsi'/>
	I0729 11:28:05.770187   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <readonly/>
	I0729 11:28:05.770197   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </disk>
	I0729 11:28:05.770214   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <disk type='file' device='disk'>
	I0729 11:28:05.770237   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:28:05.770258   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/kubernetes-upgrade-302301.rawdisk'/>
	I0729 11:28:05.770304   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <target dev='hda' bus='virtio'/>
	I0729 11:28:05.770321   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </disk>
	I0729 11:28:05.770344   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <interface type='network'>
	I0729 11:28:05.770358   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <source network='mk-kubernetes-upgrade-302301'/>
	I0729 11:28:05.770370   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <model type='virtio'/>
	I0729 11:28:05.770380   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </interface>
	I0729 11:28:05.770398   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <interface type='network'>
	I0729 11:28:05.770428   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <source network='default'/>
	I0729 11:28:05.770441   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <model type='virtio'/>
	I0729 11:28:05.770448   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </interface>
	I0729 11:28:05.770460   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <serial type='pty'>
	I0729 11:28:05.770471   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <target port='0'/>
	I0729 11:28:05.770479   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </serial>
	I0729 11:28:05.770493   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <console type='pty'>
	I0729 11:28:05.770505   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <target type='serial' port='0'/>
	I0729 11:28:05.770515   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </console>
	I0729 11:28:05.770524   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     <rng model='virtio'>
	I0729 11:28:05.770535   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)       <backend model='random'>/dev/random</backend>
	I0729 11:28:05.770545   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     </rng>
	I0729 11:28:05.770554   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     
	I0729 11:28:05.770561   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)     
	I0729 11:28:05.770573   47109 main.go:141] libmachine: (kubernetes-upgrade-302301)   </devices>
	I0729 11:28:05.770582   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) </domain>
	I0729 11:28:05.770592   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) 
	I0729 11:28:05.774661   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:3d:61:45 in network default
	I0729 11:28:05.775206   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Ensuring networks are active...
	I0729 11:28:05.775230   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:05.775961   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Ensuring network default is active
	I0729 11:28:05.776291   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Ensuring network mk-kubernetes-upgrade-302301 is active
	I0729 11:28:05.776826   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Getting domain xml...
	I0729 11:28:05.777551   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Creating domain...
	I0729 11:28:07.098777   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Waiting to get IP...
	I0729 11:28:07.099603   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.100102   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.100130   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:07.100012   47188 retry.go:31] will retry after 225.086292ms: waiting for machine to come up
	I0729 11:28:07.326466   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.327007   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.327028   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:07.326905   47188 retry.go:31] will retry after 377.825627ms: waiting for machine to come up
	I0729 11:28:07.706512   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.706968   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:07.706993   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:07.706922   47188 retry.go:31] will retry after 356.809843ms: waiting for machine to come up
	I0729 11:28:08.065413   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:08.065810   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:08.065840   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:08.065767   47188 retry.go:31] will retry after 448.674966ms: waiting for machine to come up
	I0729 11:28:08.516350   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:08.516769   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:08.516794   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:08.516717   47188 retry.go:31] will retry after 554.984954ms: waiting for machine to come up
	I0729 11:28:09.073101   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:09.073560   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:09.073588   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:09.073527   47188 retry.go:31] will retry after 673.893052ms: waiting for machine to come up
	I0729 11:28:09.749047   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:09.749554   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:09.749582   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:09.749518   47188 retry.go:31] will retry after 1.02791671s: waiting for machine to come up
	I0729 11:28:10.778888   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:10.779195   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:10.779222   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:10.779157   47188 retry.go:31] will retry after 911.337746ms: waiting for machine to come up
	I0729 11:28:11.692013   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:11.692442   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:11.692464   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:11.692409   47188 retry.go:31] will retry after 1.269149072s: waiting for machine to come up
	I0729 11:28:12.963820   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:12.964229   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:12.964251   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:12.964180   47188 retry.go:31] will retry after 1.562204941s: waiting for machine to come up
	I0729 11:28:14.529010   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:14.529644   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:14.529673   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:14.529567   47188 retry.go:31] will retry after 1.957939419s: waiting for machine to come up
	I0729 11:28:16.489433   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:16.489796   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:16.489819   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:16.489757   47188 retry.go:31] will retry after 2.357223203s: waiting for machine to come up
	I0729 11:28:18.850433   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:18.850857   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:18.850879   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:18.850815   47188 retry.go:31] will retry after 2.948792981s: waiting for machine to come up
	I0729 11:28:21.802312   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:21.802786   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find current IP address of domain kubernetes-upgrade-302301 in network mk-kubernetes-upgrade-302301
	I0729 11:28:21.802812   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | I0729 11:28:21.802748   47188 retry.go:31] will retry after 5.679085704s: waiting for machine to come up
	I0729 11:28:27.485064   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.485520   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Found IP for machine: 192.168.39.51
	I0729 11:28:27.485549   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has current primary IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.485559   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Reserving static IP address...
	I0729 11:28:27.485908   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-302301", mac: "52:54:00:5c:c7:46", ip: "192.168.39.51"} in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.559689   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Getting to WaitForSSH function...
	I0729 11:28:27.559721   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Reserved static IP address: 192.168.39.51
	I0729 11:28:27.559765   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Waiting for SSH to be available...
	I0729 11:28:27.562433   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.562863   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:27.562916   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.563107   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Using SSH client type: external
	I0729 11:28:27.563134   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa (-rw-------)
	I0729 11:28:27.563181   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:28:27.563201   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | About to run SSH command:
	I0729 11:28:27.563217   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | exit 0
	I0729 11:28:27.691003   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | SSH cmd err, output: <nil>: 
	I0729 11:28:27.691281   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) KVM machine creation complete!
	I0729 11:28:27.691577   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetConfigRaw
	I0729 11:28:27.692095   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:27.692274   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:27.692414   47109 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:28:27.692426   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetState
	I0729 11:28:27.693612   47109 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:28:27.693629   47109 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:28:27.693637   47109 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:28:27.693645   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:27.696114   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.696473   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:27.696499   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.696669   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:27.696851   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.697026   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.697179   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:27.697413   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:27.697615   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:27.697628   47109 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:28:27.810343   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:28:27.810368   47109 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:28:27.810376   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:27.813338   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.813648   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:27.813697   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.813815   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:27.814005   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.814161   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.814374   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:27.814508   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:27.814759   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:27.814774   47109 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:28:27.927348   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:28:27.927412   47109 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:28:27.927422   47109 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:28:27.927434   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetMachineName
	I0729 11:28:27.927703   47109 buildroot.go:166] provisioning hostname "kubernetes-upgrade-302301"
	I0729 11:28:27.927728   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetMachineName
	I0729 11:28:27.927939   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:27.930298   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.930632   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:27.930661   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:27.930825   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:27.930989   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.931190   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:27.931384   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:27.931551   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:27.931717   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:27.931728   47109 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-302301 && echo "kubernetes-upgrade-302301" | sudo tee /etc/hostname
	I0729 11:28:28.057752   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-302301
	
	I0729 11:28:28.057782   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.060219   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.060576   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.060603   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.060731   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:28.060911   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.061084   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.061233   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:28.061442   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:28.061658   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:28.061682   47109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-302301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-302301/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-302301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:28:28.183911   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:28:28.183953   47109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:28:28.183981   47109 buildroot.go:174] setting up certificates
	I0729 11:28:28.183992   47109 provision.go:84] configureAuth start
	I0729 11:28:28.184006   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetMachineName
	I0729 11:28:28.184309   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetIP
	I0729 11:28:28.186800   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.187175   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.187218   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.187341   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.189490   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.189786   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.189813   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.189929   47109 provision.go:143] copyHostCerts
	I0729 11:28:28.189985   47109 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:28:28.189998   47109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:28:28.190097   47109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:28:28.190215   47109 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:28:28.190226   47109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:28:28.190266   47109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:28:28.190346   47109 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:28:28.190356   47109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:28:28.190388   47109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:28:28.190451   47109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-302301 san=[127.0.0.1 192.168.39.51 kubernetes-upgrade-302301 localhost minikube]
	I0729 11:28:28.291091   47109 provision.go:177] copyRemoteCerts
	I0729 11:28:28.291158   47109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:28:28.291185   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.293713   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.293998   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.294024   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.294243   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:28.294415   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.294529   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:28.294646   47109 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa Username:docker}
	I0729 11:28:28.382391   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:28:28.407227   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 11:28:28.430618   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:28:28.454600   47109 provision.go:87] duration metric: took 270.595696ms to configureAuth
	I0729 11:28:28.454626   47109 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:28:28.454836   47109 config.go:182] Loaded profile config "kubernetes-upgrade-302301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:28:28.454920   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.457464   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.457784   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.457803   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.458013   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:28.458227   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.458396   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.458525   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:28.458713   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:28.458935   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:28.458950   47109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:28:28.739962   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:28:28.739992   47109 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:28:28.740002   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetURL
	I0729 11:28:28.741257   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | Using libvirt version 6000000
	I0729 11:28:28.743342   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.743661   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.743695   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.743845   47109 main.go:141] libmachine: Docker is up and running!
	I0729 11:28:28.743861   47109 main.go:141] libmachine: Reticulating splines...
	I0729 11:28:28.743868   47109 client.go:171] duration metric: took 23.682542316s to LocalClient.Create
	I0729 11:28:28.743891   47109 start.go:167] duration metric: took 23.682627685s to libmachine.API.Create "kubernetes-upgrade-302301"
	I0729 11:28:28.743901   47109 start.go:293] postStartSetup for "kubernetes-upgrade-302301" (driver="kvm2")
	I0729 11:28:28.743910   47109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:28:28.743932   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:28.744165   47109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:28:28.744211   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.746335   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.746690   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.746745   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.746893   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:28.747055   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.747280   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:28.747433   47109 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa Username:docker}
	I0729 11:28:28.833978   47109 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:28:28.838251   47109 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:28:28.838271   47109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:28:28.838323   47109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:28:28.838392   47109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:28:28.838468   47109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:28:28.848396   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:28:28.872010   47109 start.go:296] duration metric: took 128.096019ms for postStartSetup
	I0729 11:28:28.872055   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetConfigRaw
	I0729 11:28:28.872600   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetIP
	I0729 11:28:28.875289   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.875598   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.875634   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.875905   47109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/config.json ...
	I0729 11:28:28.876132   47109 start.go:128] duration metric: took 23.836302624s to createHost
	I0729 11:28:28.876156   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.878133   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.878424   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.878454   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.878603   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:28.878781   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.878953   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:28.879123   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:28.879261   47109 main.go:141] libmachine: Using SSH client type: native
	I0729 11:28:28.879462   47109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0729 11:28:28.879478   47109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:28:28.992231   47109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252508.964818511
	
	I0729 11:28:28.992258   47109 fix.go:216] guest clock: 1722252508.964818511
	I0729 11:28:28.992267   47109 fix.go:229] Guest: 2024-07-29 11:28:28.964818511 +0000 UTC Remote: 2024-07-29 11:28:28.876146417 +0000 UTC m=+23.964904823 (delta=88.672094ms)
	I0729 11:28:28.992294   47109 fix.go:200] guest clock delta is within tolerance: 88.672094ms
	I0729 11:28:28.992307   47109 start.go:83] releasing machines lock for "kubernetes-upgrade-302301", held for 23.952605395s
	I0729 11:28:28.992349   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:28.992612   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetIP
	I0729 11:28:28.995375   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.995755   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:28.995786   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.995905   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:28.996416   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:28.996619   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .DriverName
	I0729 11:28:28.996734   47109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:28:28.996779   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.996851   47109 ssh_runner.go:195] Run: cat /version.json
	I0729 11:28:28.996884   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHHostname
	I0729 11:28:28.999636   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:28.999797   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:29.000076   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:29.000104   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:29.000222   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:29.000252   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:29.000347   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:29.000401   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHPort
	I0729 11:28:29.000491   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:29.000554   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHKeyPath
	I0729 11:28:29.000613   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:29.000673   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetSSHUsername
	I0729 11:28:29.000744   47109 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa Username:docker}
	I0729 11:28:29.000900   47109 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kubernetes-upgrade-302301/id_rsa Username:docker}
	I0729 11:28:29.089530   47109 ssh_runner.go:195] Run: systemctl --version
	I0729 11:28:29.113511   47109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:28:29.289613   47109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:28:29.298581   47109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:28:29.298672   47109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:28:29.320348   47109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:28:29.320371   47109 start.go:495] detecting cgroup driver to use...
	I0729 11:28:29.320433   47109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:28:29.337651   47109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:28:29.352699   47109 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:28:29.352766   47109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:28:29.367501   47109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:28:29.384289   47109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:28:29.527002   47109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:28:29.684987   47109 docker.go:233] disabling docker service ...
	I0729 11:28:29.685177   47109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:28:29.702011   47109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:28:29.716080   47109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:28:29.868742   47109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:28:29.992564   47109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:28:30.007363   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:28:30.029028   47109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:28:30.029090   47109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:28:30.041676   47109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:28:30.041734   47109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:28:30.052854   47109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:28:30.064195   47109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:28:30.075029   47109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:28:30.087928   47109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:28:30.099430   47109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:28:30.099489   47109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:28:30.115470   47109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:28:30.127741   47109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:28:30.253147   47109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:28:30.395189   47109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:28:30.395260   47109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:28:30.400769   47109 start.go:563] Will wait 60s for crictl version
	I0729 11:28:30.400822   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:30.404844   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:28:30.444915   47109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:28:30.445034   47109 ssh_runner.go:195] Run: crio --version
	I0729 11:28:30.474452   47109 ssh_runner.go:195] Run: crio --version
	I0729 11:28:30.507670   47109 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:28:30.509209   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) Calling .GetIP
	I0729 11:28:30.512462   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:30.512942   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:c7:46", ip: ""} in network mk-kubernetes-upgrade-302301: {Iface:virbr1 ExpiryTime:2024-07-29 12:28:20 +0000 UTC Type:0 Mac:52:54:00:5c:c7:46 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-302301 Clientid:01:52:54:00:5c:c7:46}
	I0729 11:28:30.512974   47109 main.go:141] libmachine: (kubernetes-upgrade-302301) DBG | domain kubernetes-upgrade-302301 has defined IP address 192.168.39.51 and MAC address 52:54:00:5c:c7:46 in network mk-kubernetes-upgrade-302301
	I0729 11:28:30.513230   47109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:28:30.517936   47109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:28:30.536027   47109 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-302301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:28:30.536147   47109 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:28:30.536219   47109 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:28:30.572830   47109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:28:30.572927   47109 ssh_runner.go:195] Run: which lz4
	I0729 11:28:30.577104   47109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:28:30.581531   47109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:28:30.581562   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:28:32.303591   47109 crio.go:462] duration metric: took 1.72652659s to copy over tarball
	I0729 11:28:32.303667   47109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:28:34.964305   47109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.660605304s)
	I0729 11:28:34.964335   47109 crio.go:469] duration metric: took 2.660715484s to extract the tarball
	I0729 11:28:34.964344   47109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:28:35.007563   47109 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:28:35.058107   47109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:28:35.058136   47109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:28:35.058235   47109 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:28:35.058258   47109 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:28:35.058262   47109 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:28:35.058206   47109 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:28:35.058281   47109 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:28:35.058262   47109 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:28:35.058318   47109 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:28:35.058237   47109 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:28:35.059750   47109 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:28:35.059796   47109 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:28:35.059804   47109 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:28:35.059823   47109 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:28:35.059750   47109 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:28:35.059948   47109 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:28:35.060162   47109 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:28:35.060190   47109 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:28:35.245343   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:28:35.260605   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:28:35.261079   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:28:35.305647   47109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:28:35.305692   47109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:28:35.305743   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.331046   47109 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:28:35.331087   47109 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:28:35.331136   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.346256   47109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:28:35.346307   47109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:28:35.346331   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:28:35.346349   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.346399   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:28:35.397884   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:28:35.397952   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:28:35.397980   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:28:35.401856   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:28:35.412458   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:28:35.420568   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:28:35.433020   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:28:35.446856   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:28:35.500377   47109 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:28:35.500411   47109 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:28:35.500458   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.507032   47109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:28:35.507082   47109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:28:35.507141   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.529455   47109 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:28:35.529497   47109 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:28:35.529541   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.542607   47109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:28:35.542647   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:28:35.542662   47109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:28:35.542691   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:28:35.542729   47109 ssh_runner.go:195] Run: which crictl
	I0729 11:28:35.542753   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:28:35.620767   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:28:35.620781   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:28:35.620829   47109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:28:35.620883   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:28:35.655541   47109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:28:35.950292   47109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:28:36.094401   47109 cache_images.go:92] duration metric: took 1.03623102s to LoadCachedImages
	W0729 11:28:36.094534   47109 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0729 11:28:36.094555   47109 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.20.0 crio true true} ...
	I0729 11:28:36.094715   47109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-302301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:28:36.094815   47109 ssh_runner.go:195] Run: crio config
	I0729 11:28:36.144644   47109 cni.go:84] Creating CNI manager for ""
	I0729 11:28:36.144689   47109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:28:36.144707   47109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:28:36.144733   47109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302301 NodeName:kubernetes-upgrade-302301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:28:36.144905   47109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302301"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:28:36.144988   47109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:28:36.155730   47109 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:28:36.155793   47109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:28:36.165981   47109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0729 11:28:36.185190   47109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:28:36.204039   47109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 11:28:36.221567   47109 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0729 11:28:36.225596   47109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:28:36.238492   47109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:28:36.399838   47109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:28:36.421272   47109 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301 for IP: 192.168.39.51
	I0729 11:28:36.421293   47109 certs.go:194] generating shared ca certs ...
	I0729 11:28:36.421307   47109 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.421460   47109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:28:36.421504   47109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:28:36.421514   47109 certs.go:256] generating profile certs ...
	I0729 11:28:36.421574   47109 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.key
	I0729 11:28:36.421591   47109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.crt with IP's: []
	I0729 11:28:36.532441   47109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.crt ...
	I0729 11:28:36.532479   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.crt: {Name:mk7d0ed15ea1209a5efe22be4a1d8aa07cddfb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.532674   47109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.key ...
	I0729 11:28:36.532690   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.key: {Name:mk57fce70c4eb7a4f84242a1b444a6e1211ad9e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.532779   47109 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key.bd723dad
	I0729 11:28:36.532803   47109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt.bd723dad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.51]
	I0729 11:28:36.693120   47109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt.bd723dad ...
	I0729 11:28:36.693152   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt.bd723dad: {Name:mk4d83b7fa81e895292ccf80a2c4e7d13af9947d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.693321   47109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key.bd723dad ...
	I0729 11:28:36.693338   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key.bd723dad: {Name:mk336331a9ca1658801c499d9cd5d9b89aef6c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.693428   47109 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt.bd723dad -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt
	I0729 11:28:36.693526   47109 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key.bd723dad -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key
	I0729 11:28:36.693610   47109 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key
	I0729 11:28:36.693631   47109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.crt with IP's: []
	I0729 11:28:36.991358   47109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.crt ...
	I0729 11:28:36.991390   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.crt: {Name:mke5baa304f73fc7137d65df6d8e426d30b4b35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.991577   47109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key ...
	I0729 11:28:36.991595   47109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key: {Name:mkd5a8bb87a8392ce4fee8d467a8719452a78ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:28:36.991788   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:28:36.991830   47109 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:28:36.991857   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:28:36.991882   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:28:36.991906   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:28:36.991925   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:28:36.991961   47109 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:28:36.992548   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:28:37.024076   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:28:37.049678   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:28:37.075520   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:28:37.100035   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 11:28:37.125265   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:28:37.149556   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:28:37.174216   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:28:37.198839   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:28:37.222710   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:28:37.249674   47109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:28:37.273067   47109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:28:37.293016   47109 ssh_runner.go:195] Run: openssl version
	I0729 11:28:37.311599   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:28:37.325559   47109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:28:37.332283   47109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:28:37.332364   47109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:28:37.341129   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:28:37.357923   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:28:37.372811   47109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:28:37.377436   47109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:28:37.377501   47109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:28:37.383310   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:28:37.398497   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:28:37.413496   47109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:28:37.418236   47109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:28:37.418293   47109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:28:37.423994   47109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:28:37.435389   47109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:28:37.439537   47109 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:28:37.439586   47109 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:28:37.439646   47109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:28:37.439688   47109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:28:37.483164   47109 cri.go:89] found id: ""
	I0729 11:28:37.483232   47109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:28:37.493854   47109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:28:37.506482   47109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:28:37.516811   47109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:28:37.516828   47109 kubeadm.go:157] found existing configuration files:
	
	I0729 11:28:37.516874   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:28:37.526364   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:28:37.526434   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:28:37.536385   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:28:37.545950   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:28:37.546009   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:28:37.556309   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:28:37.565862   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:28:37.565917   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:28:37.575955   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:28:37.585900   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:28:37.585967   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:28:37.597490   47109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:28:37.727172   47109 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:28:37.727241   47109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:28:37.881116   47109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:28:37.881309   47109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:28:37.881459   47109 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:28:38.070183   47109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:28:38.072213   47109 out.go:204]   - Generating certificates and keys ...
	I0729 11:28:38.072326   47109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:28:38.072415   47109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:28:38.360461   47109 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:28:38.526597   47109 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:28:38.888777   47109 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:28:39.134265   47109 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:28:39.286270   47109 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:28:39.286479   47109 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302301 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0729 11:28:39.596889   47109 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:28:39.597217   47109 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302301 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0729 11:28:39.739515   47109 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:28:39.857215   47109 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:28:40.019383   47109 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:28:40.019622   47109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:28:40.277631   47109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:28:40.422468   47109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:28:40.484937   47109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:28:40.622497   47109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:28:40.638959   47109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:28:40.640192   47109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:28:40.640254   47109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:28:40.772577   47109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:28:40.774409   47109 out.go:204]   - Booting up control plane ...
	I0729 11:28:40.774518   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:28:40.788200   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:28:40.789694   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:28:40.791098   47109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:28:40.795049   47109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:29:20.787566   47109 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:29:20.788322   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:29:20.788563   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:29:25.789016   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:29:25.789213   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:29:35.788465   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:29:35.788732   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:29:55.788030   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:29:55.788266   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:30:35.789865   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:30:35.790180   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:30:35.790200   47109 kubeadm.go:310] 
	I0729 11:30:35.790250   47109 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:30:35.790322   47109 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:30:35.790335   47109 kubeadm.go:310] 
	I0729 11:30:35.790387   47109 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:30:35.790448   47109 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:30:35.790598   47109 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:30:35.790614   47109 kubeadm.go:310] 
	I0729 11:30:35.790785   47109 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:30:35.790842   47109 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:30:35.790887   47109 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:30:35.790892   47109 kubeadm.go:310] 
	I0729 11:30:35.791044   47109 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:30:35.791152   47109 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:30:35.791164   47109 kubeadm.go:310] 
	I0729 11:30:35.791335   47109 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:30:35.791471   47109 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:30:35.791575   47109 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:30:35.791677   47109 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:30:35.791685   47109 kubeadm.go:310] 
	I0729 11:30:35.792169   47109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:30:35.792287   47109 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:30:35.792372   47109 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:30:35.792529   47109 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302301 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302301 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
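	The troubleshooting steps kubeadm prints above can be run directly on the node. A minimal sketch, assuming the profile name kubernetes-upgrade-302301 and the CRI-O socket path taken from this log, after opening a shell with 'minikube ssh -p kubernetes-upgrade-302301':

		# check whether the kubelet service is active and inspect its recent logs
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# list any control-plane containers CRI-O managed to start
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause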
	
	I0729 11:30:35.792589   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:30:37.099334   47109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.306704385s)
	I0729 11:30:37.099410   47109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:30:37.120093   47109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:30:37.134645   47109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:30:37.134662   47109 kubeadm.go:157] found existing configuration files:
	
	I0729 11:30:37.134732   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:30:37.148985   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:30:37.149052   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:30:37.160172   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:30:37.170210   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:30:37.170287   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:30:37.180707   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:30:37.190097   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:30:37.190163   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:30:37.201257   47109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:30:37.211701   47109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:30:37.211768   47109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
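	The config cleanup above follows one pattern per kubeconfig: grep for the expected control-plane endpoint and remove the file when the endpoint is absent or the file is missing. A rough shell equivalent, assuming the endpoint shown in this log:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
			# grep exits non-zero when the endpoint is missing or the file does not exist
			sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
				|| sudo rm -f /etc/kubernetes/$f
		done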
	I0729 11:30:37.224125   47109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:30:37.308337   47109 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:30:37.308472   47109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:30:37.504941   47109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:30:37.505091   47109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:30:37.505199   47109 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:30:37.759763   47109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:30:37.762182   47109 out.go:204]   - Generating certificates and keys ...
	I0729 11:30:37.762284   47109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:30:37.762369   47109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:30:37.762517   47109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:30:37.762604   47109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:30:37.762691   47109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:30:37.762781   47109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:30:37.762860   47109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:30:37.762936   47109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:30:37.763029   47109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:30:37.763127   47109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:30:37.763178   47109 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:30:37.763249   47109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:30:38.389752   47109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:30:38.618247   47109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:30:38.921217   47109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:30:39.146379   47109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:30:39.161499   47109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:30:39.163993   47109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:30:39.164063   47109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:30:39.321412   47109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:30:39.323566   47109 out.go:204]   - Booting up control plane ...
	I0729 11:30:39.323697   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:30:39.339793   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:30:39.341293   47109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:30:39.342324   47109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:30:39.345274   47109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:31:19.348089   47109 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:31:19.348468   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:31:19.348647   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:31:24.349376   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:31:24.349602   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:31:34.350447   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:31:34.350736   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:31:54.349759   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:31:54.350064   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:32:34.349567   47109 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:32:34.349822   47109 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:32:34.349847   47109 kubeadm.go:310] 
	I0729 11:32:34.349906   47109 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:32:34.349970   47109 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:32:34.349983   47109 kubeadm.go:310] 
	I0729 11:32:34.350025   47109 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:32:34.350072   47109 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:32:34.350234   47109 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:32:34.350249   47109 kubeadm.go:310] 
	I0729 11:32:34.350393   47109 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:32:34.350455   47109 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:32:34.350501   47109 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:32:34.350510   47109 kubeadm.go:310] 
	I0729 11:32:34.350670   47109 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:32:34.350788   47109 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:32:34.350801   47109 kubeadm.go:310] 
	I0729 11:32:34.350915   47109 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:32:34.351056   47109 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:32:34.351153   47109 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:32:34.351253   47109 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:32:34.351264   47109 kubeadm.go:310] 
	I0729 11:32:34.352141   47109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:32:34.352268   47109 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:32:34.352388   47109 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:32:34.352457   47109 kubeadm.go:394] duration metric: took 3m56.912874537s to StartCluster
	I0729 11:32:34.352515   47109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:32:34.352567   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:32:34.399886   47109 cri.go:89] found id: ""
	I0729 11:32:34.399909   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.399917   47109 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:32:34.399923   47109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:32:34.399982   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:32:34.437621   47109 cri.go:89] found id: ""
	I0729 11:32:34.437655   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.437666   47109 logs.go:278] No container was found matching "etcd"
	I0729 11:32:34.437675   47109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:32:34.437734   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:32:34.474212   47109 cri.go:89] found id: ""
	I0729 11:32:34.474241   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.474258   47109 logs.go:278] No container was found matching "coredns"
	I0729 11:32:34.474266   47109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:32:34.474326   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:32:34.516150   47109 cri.go:89] found id: ""
	I0729 11:32:34.516179   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.516189   47109 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:32:34.516196   47109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:32:34.516260   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:32:34.558016   47109 cri.go:89] found id: ""
	I0729 11:32:34.558059   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.558070   47109 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:32:34.558082   47109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:32:34.558143   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:32:34.601151   47109 cri.go:89] found id: ""
	I0729 11:32:34.601182   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.601191   47109 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:32:34.601200   47109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:32:34.601264   47109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:32:34.642081   47109 cri.go:89] found id: ""
	I0729 11:32:34.642108   47109 logs.go:276] 0 containers: []
	W0729 11:32:34.642118   47109 logs.go:278] No container was found matching "kindnet"
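	The checks above query CRI-O once per control-plane component; the same sweep can be reproduced on the node with a short loop, a sketch using the component names from this log:

		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
			ids=$(sudo crictl ps -a --quiet --name=$name)
			# an empty result corresponds to the "0 containers" lines above
			[ -z "$ids" ] && echo "no containers found matching $name"
		done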
	I0729 11:32:34.642129   47109 logs.go:123] Gathering logs for container status ...
	I0729 11:32:34.642149   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:32:34.686362   47109 logs.go:123] Gathering logs for kubelet ...
	I0729 11:32:34.686395   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:32:34.756395   47109 logs.go:123] Gathering logs for dmesg ...
	I0729 11:32:34.756431   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:32:34.772131   47109 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:32:34.772164   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:32:34.901418   47109 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:32:34.901442   47109 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:32:34.901458   47109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 11:32:35.013577   47109 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:32:35.013634   47109 out.go:239] * 
	W0729 11:32:35.013694   47109 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:32:35.013716   47109 out.go:239] * 
	W0729 11:32:35.014594   47109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
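	The suggested log collection can target this profile explicitly; a minimal sketch, assuming the profile name taken from this log:

		minikube logs -p kubernetes-upgrade-302301 --file=logs.txt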
	I0729 11:32:35.017728   47109 out.go:177] 
	W0729 11:32:35.019346   47109 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:32:35.019433   47109 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:32:35.019463   47109 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:32:35.021618   47109 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
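The wait-control-plane timeout in the output above is driven by kubeadm repeatedly probing the kubelet's local healthz endpoint (http://localhost:10248/healthz per the log) and getting "connection refused". As a minimal sketch of what that check amounts to (an illustration only, not kubeadm's or minikube's actual code; the port and path are taken from the log above), the probe is just an HTTP GET that must return 200:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeKubeletHealthz mirrors the kubelet-check lines above: GET the kubelet's
	// local health endpoint and treat anything other than 200 OK as unhealthy.
	func probeKubeletHealthz() error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			return fmt.Errorf("kubelet not reachable: %w", err) // e.g. "connection refused" as in the log
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("kubelet unhealthy: HTTP %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := probeKubeletHealthz(); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kubelet healthz OK")
	}

When the probe fails the way it does here, the next step is the log's own suggestion: 'systemctl status kubelet' and 'journalctl -xeu kubelet' on the node.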
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-302301
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-302301: (1.633841242s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302301 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-302301 status --format={{.Host}}: exit status 7 (67.493232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.545352121s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-302301 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.57161ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-302301] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-302301
	    minikube start -p kubernetes-upgrade-302301 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3023012 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-302301 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
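The exit 106 above is minikube's downgrade guard: the requested --kubernetes-version (v1.20.0) is older than the v1.31.0-beta.0 the existing profile was last started with. A rough sketch of that kind of version guard, using golang.org/x/mod/semver for the comparison (the function name and error text here are illustrative, not minikube's implementation):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkDowngrade rejects a requested Kubernetes version older than the one the
	// cluster already runs, mirroring the K8S_DOWNGRADE_UNSUPPORTED exit above.
	func checkDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		// Matches the versions in the failed downgrade attempt above.
		fmt.Println(checkDowngrade("v1.31.0-beta.0", "v1.20.0"))
	}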
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302301 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.762409766s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 11:34:30.223302924 +0000 UTC m=+4448.307031411
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-302301 -n kubernetes-upgrade-302301
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-302301 logs -n 25: (1.85410004s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-224523              | cert-options-224523       | jenkins | v1.33.1 | 29 Jul 24 11:30 UTC | 29 Jul 24 11:30 UTC |
	| start   | -p running-upgrade-342576           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 11:30 UTC | 29 Jul 24 11:31 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-867440           | stopped-upgrade-867440    | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:31 UTC |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC |                     |
	|         | --no-kubernetes                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20           |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:33 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p pause-581851 --memory=2048       | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:34 UTC |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h             |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p force-systemd-env-802488         | force-systemd-env-802488  | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p pause-581851                     | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p auto-184479 --memory=3072        | auto-184479               | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:34:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:34:08.732385   54676 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:34:08.732615   54676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:08.732624   54676 out.go:304] Setting ErrFile to fd 2...
	I0729 11:34:08.732628   54676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:08.732782   54676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:34:08.733419   54676 out.go:298] Setting JSON to false
	I0729 11:34:08.734749   54676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4595,"bootTime":1722248254,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:34:08.734828   54676 start.go:139] virtualization: kvm guest
	I0729 11:34:08.737474   54676 out.go:177] * [auto-184479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:34:08.739157   54676 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:34:08.739198   54676 notify.go:220] Checking for updates...
	I0729 11:34:08.742593   54676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:34:08.744256   54676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:34:08.745737   54676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:08.747235   54676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:34:08.748638   54676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:34:08.750766   54676 config.go:182] Loaded profile config "force-systemd-env-802488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:08.750925   54676 config.go:182] Loaded profile config "kubernetes-upgrade-302301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:34:08.751134   54676 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:08.751263   54676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:34:08.790791   54676 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:34:08.792115   54676 start.go:297] selected driver: kvm2
	I0729 11:34:08.792132   54676 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:34:08.792148   54676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:34:08.793218   54676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:08.793307   54676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:34:08.809460   54676 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:34:08.809519   54676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:34:08.809820   54676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:34:08.809865   54676 cni.go:84] Creating CNI manager for ""
	I0729 11:34:08.809877   54676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:34:08.809900   54676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:34:08.809965   54676 start.go:340] cluster config:
	{Name:auto-184479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:08.810103   54676 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:08.812007   54676 out.go:177] * Starting "auto-184479" primary control-plane node in "auto-184479" cluster
	I0729 11:34:04.555850   54176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:34:04.555873   54176 crio.go:433] Images already preloaded, skipping extraction
	I0729 11:34:04.555923   54176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:34:04.688444   54176 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:34:04.688475   54176 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:34:04.688485   54176 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:34:04.688612   54176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-302301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:34:04.688697   54176 ssh_runner.go:195] Run: crio config
	I0729 11:34:05.005612   54176 cni.go:84] Creating CNI manager for ""
	I0729 11:34:05.005635   54176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:34:05.005648   54176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:34:05.005674   54176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302301 NodeName:kubernetes-upgrade-302301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:34:05.005808   54176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302301"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
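	The KubeletConfiguration rendered above pins cgroupDriver: cgroupfs, the same field the earlier suggestion about '--extra-config=kubelet.cgroup-driver=systemd' refers to. A small stdlib-only sketch for reading that field back out of the config file written to /var/lib/kubelet/config.yaml (the helper name and error handling are illustrative):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// cgroupDriverFromKubeletConfig scans a kubelet config.yaml like the one above
	// and returns the value of the top-level cgroupDriver field.
	func cgroupDriverFromKubeletConfig(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "cgroupDriver:") {
				return strings.TrimSpace(strings.TrimPrefix(line, "cgroupDriver:")), nil
			}
		}
		return "", fmt.Errorf("cgroupDriver not found in %s", path)
	}

	func main() {
		driver, err := cgroupDriverFromKubeletConfig("/var/lib/kubelet/config.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kubelet cgroupDriver:", driver) // "cgroupfs" for this profile
	}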
	
	I0729 11:34:05.005878   54176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:34:05.129660   54176 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:34:05.129742   54176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:34:05.248821   54176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0729 11:34:05.299050   54176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:34:05.389328   54176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0729 11:34:05.430997   54176 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0729 11:34:05.443633   54176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:34:05.791943   54176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:34:05.842110   54176 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301 for IP: 192.168.39.51
	I0729 11:34:05.842140   54176 certs.go:194] generating shared ca certs ...
	I0729 11:34:05.842179   54176 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:34:05.842352   54176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:34:05.842412   54176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:34:05.842427   54176 certs.go:256] generating profile certs ...
	I0729 11:34:05.842583   54176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/client.key
	I0729 11:34:05.842670   54176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key.bd723dad
	I0729 11:34:05.842754   54176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key
	I0729 11:34:05.842948   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:34:05.842998   54176 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:34:05.843012   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:34:05.843044   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:34:05.843090   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:34:05.843139   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:34:05.843202   54176 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:34:05.843886   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:34:05.903734   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:34:05.939583   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:34:05.981547   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:34:06.029737   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 11:34:06.063168   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:34:06.095035   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:34:06.130069   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kubernetes-upgrade-302301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:34:06.158673   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:34:06.185506   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:34:06.218890   54176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:34:06.357722   54176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:34:06.419973   54176 ssh_runner.go:195] Run: openssl version
	I0729 11:34:06.447936   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:34:06.478816   54176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:34:06.484778   54176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:34:06.484856   54176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:34:06.502440   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:34:06.523714   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:34:06.550490   54176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:34:06.568437   54176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:34:06.568503   54176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:34:06.577957   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:34:06.597863   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:34:06.644467   54176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:06.664253   54176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:06.664322   54176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:06.683992   54176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:34:06.699272   54176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:34:06.705485   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:34:06.714626   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:34:06.723704   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:34:06.734981   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:34:06.742485   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:34:06.749056   54176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
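	The run of 'openssl x509 ... -checkend 86400' commands above verifies that each profile certificate remains valid for at least another 24 hours. The equivalent check in Go is a comparison against the certificate's NotAfter; this is a sketch, with the path taken from the first check above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// checkCertValidFor fails if the PEM certificate at path expires within d,
	// mirroring 'openssl x509 -checkend 86400'.
	func checkCertValidFor(path string, d time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(d).After(cert.NotAfter) {
			return fmt.Errorf("certificate expires %s, within %s", cert.NotAfter, d)
		}
		return nil
	}

	func main() {
		err := checkCertValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(err) // nil means the cert is good for at least another day
	}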
	I0729 11:34:06.756370   54176 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-302301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:06.756470   54176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:34:06.756560   54176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:34:06.820545   54176 cri.go:89] found id: "2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b"
	I0729 11:34:06.820575   54176 cri.go:89] found id: "45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be"
	I0729 11:34:06.820581   54176 cri.go:89] found id: "65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33"
	I0729 11:34:06.820585   54176 cri.go:89] found id: "854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af"
	I0729 11:34:06.820589   54176 cri.go:89] found id: "2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf9780c2d211bf"
	I0729 11:34:06.820593   54176 cri.go:89] found id: "2f558e6cf395191733bb64953fd431ecb97db8b9f99a7a20f4f9a323d1d6ed84"
	I0729 11:34:06.820597   54176 cri.go:89] found id: "f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc3b45"
	I0729 11:34:06.820601   54176 cri.go:89] found id: "8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0"
	I0729 11:34:06.820604   54176 cri.go:89] found id: "2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6"
	I0729 11:34:06.820611   54176 cri.go:89] found id: "becdc244be29aa41a2b8aafcecb75ccf0d337fb1bf270a1d7bf7252fd1e4cbbd"
	I0729 11:34:06.820615   54176 cri.go:89] found id: "9af4aca13811fbcd622e047e95109438fe9fccdc4073a83010ddfa508f633f41"
	I0729 11:34:06.820620   54176 cri.go:89] found id: "439a4427d5bb8115b89439373f52bc31420959e1c97b3057cb737fa9d797738d"
	I0729 11:34:06.820623   54176 cri.go:89] found id: "3b4c33221955892dab9f92ebb1cce6071bad35cd91f8277be86ab896eea8b3cd"
	I0729 11:34:06.820630   54176 cri.go:89] found id: "2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54"
	I0729 11:34:06.820636   54176 cri.go:89] found id: "6b5d574051fc74c6a112ee0dfd9305a0735adedfa86bde88a518eb073aa47053"
	I0729 11:34:06.820643   54176 cri.go:89] found id: ""
	I0729 11:34:06.820699   54176 ssh_runner.go:195] Run: sudo runc list -f json
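	The 'found id' entries above come from listing kube-system containers through crictl against the CRI-O socket. A standalone sketch of that listing step, assuming crictl is installed and using the runtime endpoint shown earlier in this log (an illustration, not minikube's cri package):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers shells out to crictl the same way the cri.go lines
	// above do and returns the IDs of containers labelled with the kube-system namespace.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl",
			"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
			"ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}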
	
	
	==> CRI-O <==
	Jul 29 11:34:30 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:30.990624068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252870990585876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dff637eb-8a7c-4d20-9205-8977f6b22975 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:30 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:30.991638374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97ef6424-867b-4816-a838-f911c7c8448a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:30 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:30.991719627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97ef6424-867b-4816-a838-f911c7c8448a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:30 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:30.992663271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1876bf7f419570f84cb3e80cdfa12a47680bda31e8daee6a6111864403366b11,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867470018189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f1edd7aaf9af3196cd20136ff269abce7ee3debd706e03855e53e55f9ab25,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252867503037314,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c472432a612cf4de3f627de0f21716b03da4dcaea48ceb211feff8663e88500,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867479435126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dbb47c1e8cfa84f19769d2dc76a437cd05310adb59681a8e792b8189d2ea3,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1722252863668558847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87071c8b160da78dadcba3c89f0591fa340206a71e7547f0ec9e6e586db7e816,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1722252863675378200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b447e9eb104b2f800ae9effba95c762daf188b6a647fe1f2ef2032a26d05544c,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Stat
e:CONTAINER_RUNNING,CreatedAt:1722252863653003339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1722252856273364193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAIN
ER_EXITED,CreatedAt:1722252846385443925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,S
tate:CONTAINER_EXITED,CreatedAt:1722252846297233284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINE
R_EXITED,CreatedAt:1722252845603202238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252845381931560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf97
80c2d211bf,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722252844553296368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc
3b45,PodSandboxId:c1f0b243c862030ead5b7fd9255c6ed38d5cdd2f98caedd31488ecca8c2c1840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722252844413510295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0,PodSandboxId:a33f29fb538fc7a
0b900a92af1df2534a6b2837c5a42a0ca7c161a8b4b0bf3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722252844337160943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6,PodSandboxId:c0867a31d35899ff1e193ddd88bc341854eab965cd389f96956
2ab7b88bd706d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722252829849399879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54,PodSandboxId:1569aeb9879cb73e4de54fb61a0367868e4cd21d2f72e8e0bb0ab92c427c4214,Metadata:&ContainerM
etadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722252811623934810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97ef6424-867b-4816-a838-f911c7c8448a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.050525737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ce52861-4034-4ea5-898a-4179d6d472a3 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.050623927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ce52861-4034-4ea5-898a-4179d6d472a3 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.052202536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44e83b68-1af6-4c6a-a979-ea618328083e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.052847561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252871052817998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44e83b68-1af6-4c6a-a979-ea618328083e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.053824041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3ea34ab-ec25-4f6f-99cf-a84d9fd71155 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.053900539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3ea34ab-ec25-4f6f-99cf-a84d9fd71155 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.054566113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1876bf7f419570f84cb3e80cdfa12a47680bda31e8daee6a6111864403366b11,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867470018189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f1edd7aaf9af3196cd20136ff269abce7ee3debd706e03855e53e55f9ab25,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252867503037314,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c472432a612cf4de3f627de0f21716b03da4dcaea48ceb211feff8663e88500,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867479435126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dbb47c1e8cfa84f19769d2dc76a437cd05310adb59681a8e792b8189d2ea3,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1722252863668558847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87071c8b160da78dadcba3c89f0591fa340206a71e7547f0ec9e6e586db7e816,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1722252863675378200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b447e9eb104b2f800ae9effba95c762daf188b6a647fe1f2ef2032a26d05544c,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Stat
e:CONTAINER_RUNNING,CreatedAt:1722252863653003339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1722252856273364193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAIN
ER_EXITED,CreatedAt:1722252846385443925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,S
tate:CONTAINER_EXITED,CreatedAt:1722252846297233284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINE
R_EXITED,CreatedAt:1722252845603202238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252845381931560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf97
80c2d211bf,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722252844553296368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc
3b45,PodSandboxId:c1f0b243c862030ead5b7fd9255c6ed38d5cdd2f98caedd31488ecca8c2c1840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722252844413510295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0,PodSandboxId:a33f29fb538fc7a
0b900a92af1df2534a6b2837c5a42a0ca7c161a8b4b0bf3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722252844337160943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6,PodSandboxId:c0867a31d35899ff1e193ddd88bc341854eab965cd389f96956
2ab7b88bd706d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722252829849399879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54,PodSandboxId:1569aeb9879cb73e4de54fb61a0367868e4cd21d2f72e8e0bb0ab92c427c4214,Metadata:&ContainerM
etadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722252811623934810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3ea34ab-ec25-4f6f-99cf-a84d9fd71155 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.108056784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9caa4325-9a23-4ce0-8060-04f21dfd8b35 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.108158792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9caa4325-9a23-4ce0-8060-04f21dfd8b35 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.110041860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97f2e981-8a26-44bc-a832-aa4edd1d5a5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.110677394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252871110640793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97f2e981-8a26-44bc-a832-aa4edd1d5a5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.111459687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=258b9d97-d725-43bf-afa9-30c310861a1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.111614541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=258b9d97-d725-43bf-afa9-30c310861a1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.112170167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1876bf7f419570f84cb3e80cdfa12a47680bda31e8daee6a6111864403366b11,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867470018189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f1edd7aaf9af3196cd20136ff269abce7ee3debd706e03855e53e55f9ab25,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252867503037314,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c472432a612cf4de3f627de0f21716b03da4dcaea48ceb211feff8663e88500,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867479435126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dbb47c1e8cfa84f19769d2dc76a437cd05310adb59681a8e792b8189d2ea3,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1722252863668558847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87071c8b160da78dadcba3c89f0591fa340206a71e7547f0ec9e6e586db7e816,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1722252863675378200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b447e9eb104b2f800ae9effba95c762daf188b6a647fe1f2ef2032a26d05544c,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Stat
e:CONTAINER_RUNNING,CreatedAt:1722252863653003339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1722252856273364193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAIN
ER_EXITED,CreatedAt:1722252846385443925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,S
tate:CONTAINER_EXITED,CreatedAt:1722252846297233284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINE
R_EXITED,CreatedAt:1722252845603202238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252845381931560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf97
80c2d211bf,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722252844553296368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc
3b45,PodSandboxId:c1f0b243c862030ead5b7fd9255c6ed38d5cdd2f98caedd31488ecca8c2c1840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722252844413510295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0,PodSandboxId:a33f29fb538fc7a
0b900a92af1df2534a6b2837c5a42a0ca7c161a8b4b0bf3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722252844337160943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6,PodSandboxId:c0867a31d35899ff1e193ddd88bc341854eab965cd389f96956
2ab7b88bd706d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722252829849399879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54,PodSandboxId:1569aeb9879cb73e4de54fb61a0367868e4cd21d2f72e8e0bb0ab92c427c4214,Metadata:&ContainerM
etadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722252811623934810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=258b9d97-d725-43bf-afa9-30c310861a1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.153755206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b5b583a-b9d4-4aff-a5e2-847b1a375453 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.153859244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b5b583a-b9d4-4aff-a5e2-847b1a375453 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.154973696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3ae10d6-47b7-405a-a0d4-00e3a0d98d3a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.155410913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252871155387271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3ae10d6-47b7-405a-a0d4-00e3a0d98d3a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.155838000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aecb4346-64ad-4b3e-8e18-29c3a04022b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.155894972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aecb4346-64ad-4b3e-8e18-29c3a04022b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:34:31 kubernetes-upgrade-302301 crio[2380]: time="2024-07-29 11:34:31.156302385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1876bf7f419570f84cb3e80cdfa12a47680bda31e8daee6a6111864403366b11,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867470018189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f1edd7aaf9af3196cd20136ff269abce7ee3debd706e03855e53e55f9ab25,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722252867503037314,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c472432a612cf4de3f627de0f21716b03da4dcaea48ceb211feff8663e88500,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252867479435126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524dbb47c1e8cfa84f19769d2dc76a437cd05310adb59681a8e792b8189d2ea3,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1722252863668558847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87071c8b160da78dadcba3c89f0591fa340206a71e7547f0ec9e6e586db7e816,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1722252863675378200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b447e9eb104b2f800ae9effba95c762daf188b6a647fe1f2ef2032a26d05544c,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Stat
e:CONTAINER_RUNNING,CreatedAt:1722252863653003339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3,PodSandboxId:e0fe2b3c13887ef6cd5a26312c1ab7cc0af7b48516bd716742ba01f260222bc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1722252856273364193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e114ffa3-15cd-4d46-8134-232bda4c6052,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b,PodSandboxId:082a718d27539ac4fe781fd88271a57b8d45f3d225a5be6464db94ff96845f2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAIN
ER_EXITED,CreatedAt:1722252846385443925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c824085ae775d6a0f3ef41bb6c706af0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be,PodSandboxId:d7104215dfb2d44662aec188d5f72450d609d2c0aaec0402585fe5ab7bff2755,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,S
tate:CONTAINER_EXITED,CreatedAt:1722252846297233284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc6dcde1469faf9b57f6036e956aa5d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33,PodSandboxId:890247380d5cf3b7b48c747aa96a1a83f24f1e0653078583221834d5970f1093,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINE
R_EXITED,CreatedAt:1722252845603202238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dzppb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb07a624-f574-4c67-acd5-9ad2c8b207a1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af,PodSandboxId:13bce25d0b036201d6abe06d2f7f2577489a936d393e796f46c30215c9c76d5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252845381931560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wntvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0670c43f-3fcb-43c5-a10a-a144bdaa60fb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf97
80c2d211bf,PodSandboxId:cfcb96f0d58a80ddd383a6f70bcd618e93878e47e757fadad44e3cd603017548,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722252844553296368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2965388c75052a445fc94944ac846c8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc
3b45,PodSandboxId:c1f0b243c862030ead5b7fd9255c6ed38d5cdd2f98caedd31488ecca8c2c1840,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722252844413510295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0,PodSandboxId:a33f29fb538fc7a
0b900a92af1df2534a6b2837c5a42a0ca7c161a8b4b0bf3ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722252844337160943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6,PodSandboxId:c0867a31d35899ff1e193ddd88bc341854eab965cd389f96956
2ab7b88bd706d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722252829849399879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htkpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a13faf1-84d6-486a-b60c-a53e58e3de16,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54,PodSandboxId:1569aeb9879cb73e4de54fb61a0367868e4cd21d2f72e8e0bb0ab92c427c4214,Metadata:&ContainerM
etadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722252811623934810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aa3a3f3d2da8b3804e6dee2be89d02,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aecb4346-64ad-4b3e-8e18-29c3a04022b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b0f1edd7aaf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   e0fe2b3c13887       storage-provisioner
	6c472432a612c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   890247380d5cf       coredns-5cfdc65f69-dzppb
	1876bf7f41957       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   13bce25d0b036       coredns-5cfdc65f69-wntvz
	87071c8b160da       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   3                   082a718d27539       kube-controller-manager-kubernetes-upgrade-302301
	524dbb47c1e8c       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   cfcb96f0d58a8       kube-scheduler-kubernetes-upgrade-302301
	b447e9eb104b2       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            3                   d7104215dfb2d       kube-apiserver-kubernetes-upgrade-302301
	57e0b7fa500b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   e0fe2b3c13887       storage-provisioner
	2e112f49f3193       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   24 seconds ago      Exited              kube-controller-manager   2                   082a718d27539       kube-controller-manager-kubernetes-upgrade-302301
	45309c520f662       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   24 seconds ago      Exited              kube-apiserver            2                   d7104215dfb2d       kube-apiserver-kubernetes-upgrade-302301
	65336c94ddeb0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   890247380d5cf       coredns-5cfdc65f69-dzppb
	854196eb28d9f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   13bce25d0b036       coredns-5cfdc65f69-wntvz
	2ef9254b6fd99       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   26 seconds ago      Exited              kube-scheduler            1                   cfcb96f0d58a8       kube-scheduler-kubernetes-upgrade-302301
	f56c39b0f222f       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   26 seconds ago      Running             kube-proxy                1                   c1f0b243c8620       kube-proxy-htkpt
	8c4d324d2ceae       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   26 seconds ago      Running             etcd                      1                   a33f29fb538fc       etcd-kubernetes-upgrade-302301
	2bf3a56b841df       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   41 seconds ago      Exited              kube-proxy                0                   c0867a31d3589       kube-proxy-htkpt
	2ad71b0057974       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   59 seconds ago      Exited              etcd                      0                   1569aeb9879cb       etcd-kubernetes-upgrade-302301
	
	
	==> coredns [1876bf7f419570f84cb3e80cdfa12a47680bda31e8daee6a6111864403366b11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6c472432a612cf4de3f627de0f21716b03da4dcaea48ceb211feff8663e88500] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-302301
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-302301
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-302301
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:26 +0000   Mon, 29 Jul 2024 11:33:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:26 +0000   Mon, 29 Jul 2024 11:33:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:26 +0000   Mon, 29 Jul 2024 11:33:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:26 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    kubernetes-upgrade-302301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5d25a94468142ed8ef97a168bd6d9ff
	  System UUID:                f5d25a94-4681-42ed-8ef9-7a168bd6d9ff
	  Boot ID:                    f84fb858-de77-4317-9569-295dc509bdf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-dzppb                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-5cfdc65f69-wntvz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-302301                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         50s
	  kube-system                 kube-apiserver-kubernetes-upgrade-302301             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-302301    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-htkpt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-302301             100m (5%)     0 (0%)      0 (0%)           0 (0%)         0s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasSufficientPID
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           46s                node-controller  Node kubernetes-upgrade-302301 event: Registered Node kubernetes-upgrade-302301 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-302301 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-302301 event: Registered Node kubernetes-upgrade-302301 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 11:33] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.059115] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080412] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.225893] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.148241] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.346290] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.453736] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.066396] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.464726] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[ +12.796884] kauditd_printk_skb: 87 callbacks suppressed
	[ +24.251078] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.106062] kauditd_printk_skb: 10 callbacks suppressed
	[Jul29 11:34] systemd-fstab-generator[2299]: Ignoring "noauto" option for root device
	[  +0.101177] kauditd_printk_skb: 94 callbacks suppressed
	[  +0.058141] systemd-fstab-generator[2311]: Ignoring "noauto" option for root device
	[  +0.194555] systemd-fstab-generator[2325]: Ignoring "noauto" option for root device
	[  +0.173248] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	[  +0.316247] systemd-fstab-generator[2365]: Ignoring "noauto" option for root device
	[  +2.814724] systemd-fstab-generator[3136]: Ignoring "noauto" option for root device
	[  +1.726228] kauditd_printk_skb: 231 callbacks suppressed
	[ +15.601094] systemd-fstab-generator[3611]: Ignoring "noauto" option for root device
	[  +5.918592] systemd-fstab-generator[4040]: Ignoring "noauto" option for root device
	[  +0.131528] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [2ad71b0057974ea1667f5225ffbd7cc805ab24f8814e7274553258fd4780df54] <==
	{"level":"warn","ts":"2024-07-29T11:33:37.360357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.149582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-302301\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-07-29T11:33:37.360395Z","caller":"traceutil/trace.go:171","msg":"trace[414656735] range","detail":"{range_begin:/registry/leases/kube-node-lease/kubernetes-upgrade-302301; range_end:; response_count:0; response_revision:81; }","duration":"227.190411ms","start":"2024-07-29T11:33:37.133199Z","end":"2024-07-29T11:33:37.36039Z","steps":["trace[414656735] 'agreement among raft nodes before linearized reading'  (duration: 227.137177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:33:37.688653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.617232ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10748562882416209908 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-302301\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-302301\" value_size:523 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2024-07-29T11:33:37.68874Z","caller":"traceutil/trace.go:171","msg":"trace[1528810371] linearizableReadLoop","detail":"{readStateIndex:87; appliedIndex:86; }","duration":"260.135713ms","start":"2024-07-29T11:33:37.428591Z","end":"2024-07-29T11:33:37.688727Z","steps":["trace[1528810371] 'read index received'  (duration: 61.274599ms)","trace[1528810371] 'applied index is now lower than readState.Index'  (duration: 198.860061ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:33:37.688816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.239071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:node\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-07-29T11:33:37.68886Z","caller":"traceutil/trace.go:171","msg":"trace[1185974283] range","detail":"{range_begin:/registry/clusterroles/system:node; range_end:; response_count:0; response_revision:83; }","duration":"260.287518ms","start":"2024-07-29T11:33:37.428565Z","end":"2024-07-29T11:33:37.688853Z","steps":["trace[1185974283] 'agreement among raft nodes before linearized reading'  (duration: 260.196613ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:33:37.689158Z","caller":"traceutil/trace.go:171","msg":"trace[1362692398] transaction","detail":"{read_only:false; response_revision:83; number_of_response:1; }","duration":"321.137245ms","start":"2024-07-29T11:33:37.36801Z","end":"2024-07-29T11:33:37.689147Z","steps":["trace[1362692398] 'process raft request'  (duration: 121.955937ms)","trace[1362692398] 'compare'  (duration: 198.489397ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:33:37.689233Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:33:37.367999Z","time spent":"321.206936ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":589,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-302301\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-302301\" value_size:523 >> failure:<>"}
	{"level":"info","ts":"2024-07-29T11:33:39.336763Z","caller":"traceutil/trace.go:171","msg":"trace[1879204225] linearizableReadLoop","detail":"{readStateIndex:205; appliedIndex:205; }","duration":"114.318879ms","start":"2024-07-29T11:33:39.222426Z","end":"2024-07-29T11:33:39.336744Z","steps":["trace[1879204225] 'read index received'  (duration: 114.309184ms)","trace[1879204225] 'applied index is now lower than readState.Index'  (duration: 5.463µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:33:39.337069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.626973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T11:33:39.337643Z","caller":"traceutil/trace.go:171","msg":"trace[9185133] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:0; response_revision:199; }","duration":"115.203172ms","start":"2024-07-29T11:33:39.222422Z","end":"2024-07-29T11:33:39.337625Z","steps":["trace[9185133] 'agreement among raft nodes before linearized reading'  (duration: 114.607543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:33:39.342911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.318148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T11:33:39.343046Z","caller":"traceutil/trace.go:171","msg":"trace[623294096] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:200; }","duration":"119.524277ms","start":"2024-07-29T11:33:39.223512Z","end":"2024-07-29T11:33:39.343036Z","steps":["trace[623294096] 'agreement among raft nodes before linearized reading'  (duration: 119.297058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:33:39.343348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.885495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T11:33:39.343483Z","caller":"traceutil/trace.go:171","msg":"trace[627343846] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:200; }","duration":"121.023478ms","start":"2024-07-29T11:33:39.222453Z","end":"2024-07-29T11:33:39.343477Z","steps":["trace[627343846] 'agreement among raft nodes before linearized reading'  (duration: 120.872869ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:33:55.146691Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T11:33:55.146795Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-302301","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"]}
	{"level":"warn","ts":"2024-07-29T11:33:55.1469Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:33:55.147035Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:33:55.253706Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.51:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:33:55.25377Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.51:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T11:33:55.253837Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9049a3446d48952a","current-leader-member-id":"9049a3446d48952a"}
	{"level":"info","ts":"2024-07-29T11:33:55.256677Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2024-07-29T11:33:55.256819Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2024-07-29T11:33:55.256845Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-302301","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"]}
	
	
	==> etcd [8c4d324d2ceae38b7a824634ec9cc131cfc34b16e4f92e1608e86ca2957808d0] <==
	{"level":"info","ts":"2024-07-29T11:34:04.966536Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:34:04.966782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a switched to configuration voters=(10397020729048077610)"}
	{"level":"info","ts":"2024-07-29T11:34:04.966803Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9049a3446d48952a","initial-advertise-peer-urls":["https://192.168.39.51:2380"],"listen-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.51:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:34:04.966832Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:34:04.966969Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2024-07-29T11:34:04.966978Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2024-07-29T11:34:04.966978Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","added-peer-id":"9049a3446d48952a","added-peer-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2024-07-29T11:34:04.967137Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:34:04.967172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:34:06.25984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:06.259918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:06.259966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgPreVoteResp from 9049a3446d48952a at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:06.25999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:06.26Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgVoteResp from 9049a3446d48952a at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:06.260016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:06.260077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9049a3446d48952a elected leader 9049a3446d48952a at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:06.265592Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9049a3446d48952a","local-member-attributes":"{Name:kubernetes-upgrade-302301 ClientURLs:[https://192.168.39.51:2379]}","request-path":"/0/members/9049a3446d48952a/attributes","cluster-id":"ec92057c53901c6c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:34:06.265856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:06.267388Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:34:06.270476Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:06.271332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:34:06.276429Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:06.276487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:06.277428Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:34:06.27873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.51:2379"}
	
	
	==> kernel <==
	 11:34:31 up 1 min,  0 users,  load average: 1.62, 0.50, 0.17
	Linux kubernetes-upgrade-302301 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be] <==
	I0729 11:34:08.942513       1 controller.go:142] Starting OpenAPI controller
	I0729 11:34:08.942617       1 controller.go:90] Starting OpenAPI V3 controller
	I0729 11:34:08.942656       1 naming_controller.go:294] Starting NamingConditionController
	I0729 11:34:08.942669       1 establishing_controller.go:79] Starting EstablishingController
	I0729 11:34:08.942699       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0729 11:34:08.942710       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 11:34:08.942719       1 crd_finalizer.go:269] Starting CRDFinalizer
	E0729 11:34:08.943180       1 controller.go:131] Unable to remove endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.51, ResourceVersion: 0, AdditionalErrorMsg: 
	I0729 11:34:08.945387       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I0729 11:34:08.946360       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0729 11:34:08.946471       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I0729 11:34:08.946552       1 establishing_controller.go:83] Shutting down EstablishingController
	I0729 11:34:08.946604       1 naming_controller.go:298] Shutting down NamingConditionController
	I0729 11:34:08.946679       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	E0729 11:34:08.946690       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0729 11:34:08.946782       1 controller.go:148] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I0729 11:34:08.946925       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	E0729 11:34:08.946955       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for crd-autoregister" logger="UnhandledError"
	F0729 11:34:08.947018       1 hooks.go:210] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I0729 11:34:08.986229       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	F0729 11:34:09.011911       1 hooks.go:210] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E0729 11:34:09.011986       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0729 11:34:09.012001       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	I0729 11:34:09.012020       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:34:09.011943       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	
	
	==> kube-apiserver [b447e9eb104b2f800ae9effba95c762daf188b6a647fe1f2ef2032a26d05544c] <==
	I0729 11:34:26.155372       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:34:26.240080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 11:34:26.241010       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:34:26.241089       1 policy_source.go:224] refreshing policies
	I0729 11:34:26.248954       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:34:26.315628       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 11:34:26.315692       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:34:26.316123       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 11:34:26.316895       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:34:26.334857       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 11:34:26.334891       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 11:34:26.334897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 11:34:26.336942       1 aggregator.go:171] initial CRD sync complete...
	I0729 11:34:26.337021       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 11:34:26.337045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 11:34:26.337068       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:34:26.342371       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 11:34:27.123162       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:34:27.781605       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:34:28.490888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:34:28.504695       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:34:28.559599       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:34:28.672855       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:34:28.681668       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:34:30.650490       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b] <==
	
	
	==> kube-controller-manager [87071c8b160da78dadcba3c89f0591fa340206a71e7547f0ec9e6e586db7e816] <==
	I0729 11:34:30.683321       1 shared_informer.go:320] Caches are synced for expand
	I0729 11:34:30.688804       1 shared_informer.go:320] Caches are synced for node
	I0729 11:34:30.688886       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0729 11:34:30.688929       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0729 11:34:30.688955       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0729 11:34:30.688962       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 11:34:30.689061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-302301"
	I0729 11:34:30.714422       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 11:34:30.729308       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 11:34:30.729348       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 11:34:30.731656       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 11:34:30.731673       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 11:34:30.734332       1 shared_informer.go:320] Caches are synced for taint
	I0729 11:34:30.734481       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 11:34:30.734548       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-302301"
	I0729 11:34:30.734582       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 11:34:30.745227       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 11:34:30.746510       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 11:34:30.747735       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 11:34:30.756215       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 11:34:30.783490       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:34:30.783546       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 11:34:30.804326       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:34:30.850641       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:34:30.870111       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [2bf3a56b841df9bf7f900eef761364fd6bdeae8940e53977e8a8c8a5f23c06c6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 11:33:50.027523       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 11:33:50.038608       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	E0729 11:33:50.038713       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 11:33:50.078004       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 11:33:50.078064       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:33:50.078104       1 server_linux.go:170] "Using iptables Proxier"
	I0729 11:33:50.081027       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 11:33:50.081608       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 11:33:50.081636       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:33:50.083507       1 config.go:197] "Starting service config controller"
	I0729 11:33:50.083704       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:33:50.083753       1 config.go:104] "Starting endpoint slice config controller"
	I0729 11:33:50.083758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:33:50.084605       1 config.go:326] "Starting node config controller"
	I0729 11:33:50.084639       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:33:50.184960       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:33:50.185079       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:33:50.185117       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f56c39b0f222fca4daf4ae3590256d825e11223e3417b9080db2295e51fc3b45] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 11:34:06.106055       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302301\": dial tcp 192.168.39.51:8443: connect: connection refused"
	E0729 11:34:10.018186       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302301\": dial tcp 192.168.39.51:8443: connect: connection refused"
	E0729 11:34:12.314933       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302301\": dial tcp 192.168.39.51:8443: connect: connection refused"
	E0729 11:34:16.901428       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302301\": dial tcp 192.168.39.51:8443: connect: connection refused"
	I0729 11:34:26.293559       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	E0729 11:34:26.293680       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 11:34:26.394434       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 11:34:26.394524       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:34:26.394569       1 server_linux.go:170] "Using iptables Proxier"
	I0729 11:34:26.400101       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 11:34:26.401781       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 11:34:26.401815       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:26.404089       1 config.go:197] "Starting service config controller"
	I0729 11:34:26.404155       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:34:26.404182       1 config.go:104] "Starting endpoint slice config controller"
	I0729 11:34:26.404186       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:34:26.404819       1 config.go:326] "Starting node config controller"
	I0729 11:34:26.404849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:34:26.504411       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:34:26.504644       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:34:26.505328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf9780c2d211bf] <==
	E0729 11:34:18.099192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.51:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:18.148346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:18.148423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.51:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:18.502159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:18.502233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:18.533661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.51:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:18.533710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.51:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:18.633627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.51:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:18.633688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.51:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:18.672057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:18.672099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.51:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:19.208989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.51:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:19.209054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.51:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:19.411058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.51:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:19.411104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.51:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:19.419412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:19.419481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.51:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:19.538714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.51:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:19.538776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.51:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:20.039397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.51:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:20.039476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.51:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	W0729 11:34:20.516873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.51:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0729 11:34:20.516946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.51:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.51:8443: connect: connection refused" logger="UnhandledError"
	E0729 11:34:21.112651       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0729 11:34:21.112768       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [524dbb47c1e8cfa84f19769d2dc76a437cd05310adb59681a8e792b8189d2ea3] <==
	I0729 11:34:24.871383       1 serving.go:386] Generated self-signed cert in-memory
	W0729 11:34:26.195784       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:34:26.195896       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:34:26.195929       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:34:26.196017       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:34:26.251675       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 11:34:26.251715       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:26.262930       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:34:26.263147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:34:26.263200       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:34:26.263236       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 11:34:26.364353       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:23.451643    3618 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-302301"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:23.452664    3618 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.51:8443: connect: connection refused" node="kubernetes-upgrade-302301"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:23.637863    3618 scope.go:117] "RemoveContainer" containerID="45309c520f6627ff836bab94dbb3578fbaa62354b310b37190c4ba0de5a771be"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:23.638209    3618 scope.go:117] "RemoveContainer" containerID="2e112f49f319316114697ccb82838ddd605a7f08b6e3fbdad1891ad71e30fa2b"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:23.639825    3618 scope.go:117] "RemoveContainer" containerID="2ef9254b6fd99988ec7e91d1a9db22bf1123ab4279b3278772bf9780c2d211bf"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:23.758153    3618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-302301?timeout=10s\": dial tcp 192.168.39.51:8443: connect: connection refused" interval="800ms"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:23.854926    3618 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-302301"
	Jul 29 11:34:23 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:23.855824    3618 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.51:8443: connect: connection refused" node="kubernetes-upgrade-302301"
	Jul 29 11:34:24 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:24.658376    3618 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-302301"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:26.302491    3618 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-302301"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:26.303112    3618 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-302301"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:26.303384    3618 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:26.305429    3618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:26.396151    3618 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-302301\" already exists" pod="kube-system/etcd-kubernetes-upgrade-302301"
	Jul 29 11:34:26 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:26.396864    3618 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-302301\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-302301"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.133875    3618 apiserver.go:52] "Watching apiserver"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.155336    3618 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.224374    3618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e114ffa3-15cd-4d46-8134-232bda4c6052-tmp\") pod \"storage-provisioner\" (UID: \"e114ffa3-15cd-4d46-8134-232bda4c6052\") " pod="kube-system/storage-provisioner"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.224565    3618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a13faf1-84d6-486a-b60c-a53e58e3de16-xtables-lock\") pod \"kube-proxy-htkpt\" (UID: \"2a13faf1-84d6-486a-b60c-a53e58e3de16\") " pod="kube-system/kube-proxy-htkpt"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.224803    3618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a13faf1-84d6-486a-b60c-a53e58e3de16-lib-modules\") pod \"kube-proxy-htkpt\" (UID: \"2a13faf1-84d6-486a-b60c-a53e58e3de16\") " pod="kube-system/kube-proxy-htkpt"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: E0729 11:34:27.363938    3618 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-302301\" already exists" pod="kube-system/etcd-kubernetes-upgrade-302301"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.440957    3618 scope.go:117] "RemoveContainer" containerID="65336c94ddeb0585fd67da09c6bcb7f52de3b0dbd3fc96d5fc56553a5088ec33"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.441665    3618 scope.go:117] "RemoveContainer" containerID="854196eb28d9ff96f58c3647ab5e1a3e2635a635254288f9e7d41377c2e5f6af"
	Jul 29 11:34:27 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:27.443536    3618 scope.go:117] "RemoveContainer" containerID="57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3"
	Jul 29 11:34:30 kubernetes-upgrade-302301 kubelet[3618]: I0729 11:34:30.253578    3618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [57e0b7fa500b571784f2ae06011e9fbdd2c4eb5ccd7fb163953a2e056d2bcda3] <==
	I0729 11:34:16.367814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 11:34:16.369818       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [9b0f1edd7aaf9af3196cd20136ff269abce7ee3debd706e03855e53e55f9ab25] <==
	I0729 11:34:27.732940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:34:27.759150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:34:27.759436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:34:27.798551       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:34:27.798779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-302301_63b0e3ee-8027-48ef-a028-18853f1e21bc!
	I0729 11:34:27.800051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9139161b-6da5-4cf6-a164-e78d662fe25f", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-302301_63b0e3ee-8027-48ef-a028-18853f1e21bc became leader
	I0729 11:34:27.900364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-302301_63b0e3ee-8027-48ef-a028-18853f1e21bc!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:34:30.565176   54903 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19337-3845/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
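The "bufio.Scanner: token too long" error in the stderr above comes from Go's bufio.Scanner, which refuses to return a single line longer than its token limit (bufio.MaxScanTokenSize, 64 KiB by default); the very long wrapped config lines written to lastStart.txt easily exceed that. The sketch below is a minimal, hypothetical illustration — not minikube's actual logs.go code — of how the limit surfaces and how Scanner.Buffer can raise it; the file name echoes the error message and the maxLine value is an arbitrary illustrative cap.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLines scans a file line by line. With the default Scanner buffer,
// any line longer than bufio.MaxScanTokenSize (64 KiB) aborts the scan
// and sc.Err() reports "bufio.Scanner: token too long".
func readLines(path string, maxLine int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line cap so very long log lines fit into one token.
	sc.Buffer(make([]byte, 0, 64*1024), maxLine)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// "lastStart.txt" mirrors the file named in the error; 10 MiB is an
	// assumed cap for illustration, not a value taken from minikube.
	lines, err := readLines("lastStart.txt", 10*1024*1024)
	fmt.Println("lines:", len(lines), "err:", err)
}

An alternative that avoids a fixed per-line limit entirely is reading with bufio.Reader.ReadString('\n') and growing as needed.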
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-302301 -n kubernetes-upgrade-302301
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-302301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-scheduler-kubernetes-upgrade-302301
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-302301 describe pod kube-scheduler-kubernetes-upgrade-302301
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-302301 describe pod kube-scheduler-kubernetes-upgrade-302301: exit status 1 (70.690917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-302301" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-302301 describe pod kube-scheduler-kubernetes-upgrade-302301: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-302301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-302301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-302301: (1.538237973s)
--- FAIL: TestKubernetesUpgrade (389.46s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (88.33s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581851 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-581851 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.982918123s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-581851] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-581851" primary control-plane node in "pause-581851" cluster
	* Updating the running kvm2 "pause-581851" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-581851" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:34:02.295941   54471 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:34:02.296405   54471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:02.296435   54471 out.go:304] Setting ErrFile to fd 2...
	I0729 11:34:02.296446   54471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:02.297369   54471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:34:02.297950   54471 out.go:298] Setting JSON to false
	I0729 11:34:02.298951   54471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4588,"bootTime":1722248254,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:34:02.299015   54471 start.go:139] virtualization: kvm guest
	I0729 11:34:02.301388   54471 out.go:177] * [pause-581851] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:34:02.303443   54471 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:34:02.303488   54471 notify.go:220] Checking for updates...
	I0729 11:34:02.306504   54471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:34:02.308205   54471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:34:02.309630   54471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:02.311077   54471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:34:02.312600   54471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:34:02.314659   54471 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:02.315298   54471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:02.315362   54471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:02.331700   54471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I0729 11:34:02.332111   54471 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:02.332642   54471 main.go:141] libmachine: Using API Version  1
	I0729 11:34:02.332693   54471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:02.333045   54471 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:02.333255   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:02.333544   54471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:34:02.333894   54471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:02.333939   54471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:02.350695   54471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43469
	I0729 11:34:02.351332   54471 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:02.351888   54471 main.go:141] libmachine: Using API Version  1
	I0729 11:34:02.351913   54471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:02.352265   54471 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:02.352498   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:02.388134   54471 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:34:02.389550   54471 start.go:297] selected driver: kvm2
	I0729 11:34:02.389564   54471 start.go:901] validating driver "kvm2" against &{Name:pause-581851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-581851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:02.389726   54471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:34:02.390126   54471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:02.390205   54471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:34:02.408791   54471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:34:02.409750   54471 cni.go:84] Creating CNI manager for ""
	I0729 11:34:02.409768   54471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:34:02.409860   54471 start.go:340] cluster config:
	{Name:pause-581851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-581851 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:02.410012   54471 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:02.412008   54471 out.go:177] * Starting "pause-581851" primary control-plane node in "pause-581851" cluster
	I0729 11:34:02.413451   54471 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:34:02.413488   54471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:34:02.413500   54471 cache.go:56] Caching tarball of preloaded images
	I0729 11:34:02.413591   54471 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:34:02.413605   54471 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:34:02.413726   54471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/config.json ...
	I0729 11:34:02.413944   54471 start.go:360] acquireMachinesLock for pause-581851: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:34:27.007758   54471 start.go:364] duration metric: took 24.593783058s to acquireMachinesLock for "pause-581851"
	I0729 11:34:27.007812   54471 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:34:27.007822   54471 fix.go:54] fixHost starting: 
	I0729 11:34:27.008220   54471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:27.008266   54471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:27.025779   54471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34679
	I0729 11:34:27.026167   54471 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:27.026763   54471 main.go:141] libmachine: Using API Version  1
	I0729 11:34:27.026790   54471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:27.027148   54471 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:27.027384   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:27.027537   54471 main.go:141] libmachine: (pause-581851) Calling .GetState
	I0729 11:34:27.029294   54471 fix.go:112] recreateIfNeeded on pause-581851: state=Running err=<nil>
	W0729 11:34:27.029316   54471 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:34:27.031684   54471 out.go:177] * Updating the running kvm2 "pause-581851" VM ...
	I0729 11:34:27.033129   54471 machine.go:94] provisionDockerMachine start ...
	I0729 11:34:27.033166   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:27.033384   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.036042   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.036606   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.036645   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.036846   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:27.037044   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.037213   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.037362   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:27.037512   54471 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:27.037694   54471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 11:34:27.037705   54471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:34:27.145023   54471 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581851
	
	I0729 11:34:27.145083   54471 main.go:141] libmachine: (pause-581851) Calling .GetMachineName
	I0729 11:34:27.145382   54471 buildroot.go:166] provisioning hostname "pause-581851"
	I0729 11:34:27.145406   54471 main.go:141] libmachine: (pause-581851) Calling .GetMachineName
	I0729 11:34:27.145621   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.148821   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.149240   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.149280   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.149544   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:27.149715   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.149886   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.150055   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:27.150235   54471 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:27.150462   54471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 11:34:27.150482   54471 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-581851 && echo "pause-581851" | sudo tee /etc/hostname
	I0729 11:34:27.269412   54471 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581851
	
	I0729 11:34:27.269441   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.272635   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.273122   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.273156   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.273330   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:27.273528   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.273801   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.274019   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:27.274209   54471 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:27.274459   54471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 11:34:27.274483   54471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-581851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-581851/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-581851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:34:27.390160   54471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:34:27.390189   54471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:34:27.390213   54471 buildroot.go:174] setting up certificates
	I0729 11:34:27.390226   54471 provision.go:84] configureAuth start
	I0729 11:34:27.390239   54471 main.go:141] libmachine: (pause-581851) Calling .GetMachineName
	I0729 11:34:27.390576   54471 main.go:141] libmachine: (pause-581851) Calling .GetIP
	I0729 11:34:27.393978   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.394373   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.394400   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.394537   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.397337   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.397761   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.397793   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.397929   54471 provision.go:143] copyHostCerts
	I0729 11:34:27.397995   54471 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:34:27.398008   54471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:34:27.398079   54471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:34:27.398190   54471 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:34:27.398203   54471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:34:27.398235   54471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:34:27.398311   54471 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:34:27.398321   54471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:34:27.398347   54471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:34:27.398414   54471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.pause-581851 san=[127.0.0.1 192.168.50.53 localhost minikube pause-581851]
	I0729 11:34:27.597259   54471 provision.go:177] copyRemoteCerts
	I0729 11:34:27.597323   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:34:27.597360   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.600632   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.600973   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.601005   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.601236   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:27.601448   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.601607   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:27.601795   54471 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/pause-581851/id_rsa Username:docker}
	I0729 11:34:27.689593   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:34:27.727971   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 11:34:27.769644   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:34:27.809291   54471 provision.go:87] duration metric: took 419.050749ms to configureAuth
	I0729 11:34:27.809326   54471 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:34:27.809608   54471 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:27.809701   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:27.812978   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.813566   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:27.813606   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:27.813810   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:27.814062   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.814254   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:27.814445   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:27.814674   54471 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:27.814905   54471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 11:34:27.814933   54471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:34:33.438085   54471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:34:33.438109   54471 machine.go:97] duration metric: took 6.404966159s to provisionDockerMachine
	I0729 11:34:33.438126   54471 start.go:293] postStartSetup for "pause-581851" (driver="kvm2")
	I0729 11:34:33.438140   54471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:34:33.438161   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:33.438475   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:34:33.438504   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:33.731887   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.732485   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:33.732515   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.732656   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:33.732857   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:33.733006   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:33.733168   54471 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/pause-581851/id_rsa Username:docker}
	I0729 11:34:33.815161   54471 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:34:33.819991   54471 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:34:33.820021   54471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:34:33.820096   54471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:34:33.820195   54471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:34:33.820339   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:34:33.830736   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:34:33.861424   54471 start.go:296] duration metric: took 423.282105ms for postStartSetup
	I0729 11:34:33.861467   54471 fix.go:56] duration metric: took 6.853645134s for fixHost
	I0729 11:34:33.861486   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:33.864832   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.865245   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:33.865274   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.865505   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:33.865750   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:33.865948   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:33.866152   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:33.866344   54471 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:33.866563   54471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 11:34:33.866578   54471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:34:33.971828   54471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252873.967430475
	
	I0729 11:34:33.971854   54471 fix.go:216] guest clock: 1722252873.967430475
	I0729 11:34:33.971864   54471 fix.go:229] Guest: 2024-07-29 11:34:33.967430475 +0000 UTC Remote: 2024-07-29 11:34:33.861470686 +0000 UTC m=+31.608051476 (delta=105.959789ms)
	I0729 11:34:33.971905   54471 fix.go:200] guest clock delta is within tolerance: 105.959789ms
	I0729 11:34:33.971913   54471 start.go:83] releasing machines lock for "pause-581851", held for 6.964123947s
	I0729 11:34:33.971947   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:33.972228   54471 main.go:141] libmachine: (pause-581851) Calling .GetIP
	I0729 11:34:33.975648   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.976017   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:33.976056   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.976291   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:33.976865   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:33.977027   54471 main.go:141] libmachine: (pause-581851) Calling .DriverName
	I0729 11:34:33.977137   54471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:34:33.977174   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:33.977235   54471 ssh_runner.go:195] Run: cat /version.json
	I0729 11:34:33.977252   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHHostname
	I0729 11:34:33.980374   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.980648   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:33.980686   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.980699   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.981044   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:33.981256   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:33.981516   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:33.981543   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:33.981552   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHPort
	I0729 11:34:33.981561   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:33.981758   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHKeyPath
	I0729 11:34:33.981763   54471 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/pause-581851/id_rsa Username:docker}
	I0729 11:34:33.981895   54471 main.go:141] libmachine: (pause-581851) Calling .GetSSHUsername
	I0729 11:34:33.982030   54471 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/pause-581851/id_rsa Username:docker}
	I0729 11:34:34.079484   54471 ssh_runner.go:195] Run: systemctl --version
	I0729 11:34:34.086871   54471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:34:34.253573   54471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:34:34.261174   54471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:34:34.261253   54471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:34:34.275747   54471 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 11:34:34.275772   54471 start.go:495] detecting cgroup driver to use...
	I0729 11:34:34.275840   54471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:34:34.299938   54471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:34:34.319625   54471 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:34:34.319691   54471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:34:34.360891   54471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:34:34.388061   54471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:34:34.543527   54471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:34:34.709992   54471 docker.go:233] disabling docker service ...
	I0729 11:34:34.710072   54471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:34:34.728647   54471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:34:34.743938   54471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:34:34.903723   54471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:34:35.071304   54471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:34:35.087124   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:34:35.109892   54471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:34:35.109963   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.122376   54471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:34:35.122453   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.138253   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.151925   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.165370   54471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:34:35.178516   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.190313   54471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.201993   54471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:34:35.214188   54471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:34:35.224930   54471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:34:35.236545   54471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:34:35.382838   54471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:34:36.758720   54471 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.375818757s)
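(Hedged sketch, not part of the captured log.) The sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before cri-o is restarted. Assuming shell access to the node, the resulting settings could be spot-checked with something like:

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the log above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"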
	I0729 11:34:36.758752   54471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:34:36.758806   54471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:34:36.769348   54471 start.go:563] Will wait 60s for crictl version
	I0729 11:34:36.769412   54471 ssh_runner.go:195] Run: which crictl
	I0729 11:34:36.811171   54471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:34:36.928461   54471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:34:36.928551   54471 ssh_runner.go:195] Run: crio --version
	I0729 11:34:37.192006   54471 ssh_runner.go:195] Run: crio --version
	I0729 11:34:37.389121   54471 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:34:37.390921   54471 main.go:141] libmachine: (pause-581851) Calling .GetIP
	I0729 11:34:37.394618   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:37.395018   54471 main.go:141] libmachine: (pause-581851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2e:00", ip: ""} in network mk-pause-581851: {Iface:virbr3 ExpiryTime:2024-07-29 12:33:18 +0000 UTC Type:0 Mac:52:54:00:07:2e:00 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:pause-581851 Clientid:01:52:54:00:07:2e:00}
	I0729 11:34:37.395047   54471 main.go:141] libmachine: (pause-581851) DBG | domain pause-581851 has defined IP address 192.168.50.53 and MAC address 52:54:00:07:2e:00 in network mk-pause-581851
	I0729 11:34:37.395331   54471 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:34:37.459981   54471 kubeadm.go:883] updating cluster {Name:pause-581851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-581851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:34:37.460107   54471 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:34:37.460151   54471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:34:37.723125   54471 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:34:37.723154   54471 crio.go:433] Images already preloaded, skipping extraction
	I0729 11:34:37.723214   54471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:34:37.891816   54471 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:34:37.891843   54471 cache_images.go:84] Images are preloaded, skipping loading
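(Hedged sketch, not part of the captured log.) The two `sudo crictl images --output json` runs above are how minikube decides the preloaded tarball already covers every required image; the same inventory can be read on the node in table form with:

    sudo crictl images    # human-readable listing of the JSON output used above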
	I0729 11:34:37.891853   54471 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 11:34:37.891998   54471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-581851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-581851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:34:37.892090   54471 ssh_runner.go:195] Run: crio config
	I0729 11:34:38.007214   54471 cni.go:84] Creating CNI manager for ""
	I0729 11:34:38.007240   54471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:34:38.007256   54471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:34:38.007290   54471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-581851 NodeName:pause-581851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:34:38.007470   54471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-581851"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:34:38.007539   54471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:34:38.110267   54471 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:34:38.110336   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:34:38.148030   54471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 11:34:38.197738   54471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:34:38.242695   54471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
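(Hedged sketch, not part of the captured log.) The four YAML documents printed under "kubeadm config:" above are what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes). Assuming shell access to the node, a quick sanity check that all four documents made it across is:

    grep -c '^---' /var/tmp/minikube/kubeadm.yaml.new   # 3 separators, i.e. 4 documents
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # expected kinds: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration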
	I0729 11:34:38.270323   54471 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 11:34:38.276904   54471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:34:38.495977   54471 ssh_runner.go:195] Run: sudo systemctl start kubelet
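(Hedged sketch, not part of the captured log.) The kubelet [Unit]/[Service] snippet above is installed as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes) alongside /lib/systemd/system/kubelet.service before the daemon-reload and start; the effective unit, drop-in included, could be confirmed on the node with:

    systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in and its ExecStart override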
	I0729 11:34:38.515336   54471 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851 for IP: 192.168.50.53
	I0729 11:34:38.515359   54471 certs.go:194] generating shared ca certs ...
	I0729 11:34:38.515378   54471 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:34:38.515565   54471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:34:38.515629   54471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:34:38.515641   54471 certs.go:256] generating profile certs ...
	I0729 11:34:38.515752   54471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/client.key
	I0729 11:34:38.515844   54471 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/apiserver.key.23e64c64
	I0729 11:34:38.515893   54471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/proxy-client.key
	I0729 11:34:38.516029   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:34:38.516066   54471 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:34:38.516078   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:34:38.516113   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:34:38.516143   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:34:38.516170   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:34:38.516221   54471 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:34:38.517146   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:34:38.549969   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:34:38.586402   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:34:38.615870   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:34:38.654845   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 11:34:38.685170   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:34:38.713245   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:34:38.751356   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/pause-581851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:34:38.778930   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:34:38.813624   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:34:38.845353   54471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:34:38.873898   54471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:34:38.893796   54471 ssh_runner.go:195] Run: openssl version
	I0729 11:34:38.943711   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:34:38.956683   54471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:34:38.962043   54471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:34:38.962131   54471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:34:38.973321   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:34:38.985847   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:34:38.998873   54471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:39.004646   54471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:39.004728   54471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:34:39.012127   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:34:39.026459   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:34:39.041599   54471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:34:39.046673   54471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:34:39.046747   54471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:34:39.061535   54471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
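(Hedged sketch, not part of the captured log.) The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how tools locate CAs under /etc/ssl/certs; each comes from the `openssl x509 -hash -noout` call run just before the corresponding `ln -fs`:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the b5213941.0 link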
	I0729 11:34:39.078287   54471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:34:39.083925   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:34:39.092718   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:34:39.103032   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:34:39.110529   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:34:39.121261   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:34:39.129527   54471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
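(Hedged sketch, not part of the captured log.) Each `openssl x509 -noout ... -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 h), which is how the existing control-plane certificates are judged still usable on this restart. The expiry dates can also be printed directly, for example:

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt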
	I0729 11:34:39.137688   54471 kubeadm.go:392] StartCluster: {Name:pause-581851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-581851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:39.137831   54471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:34:39.137923   54471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:34:39.218938   54471 cri.go:89] found id: "269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6"
	I0729 11:34:39.218967   54471 cri.go:89] found id: "fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606"
	I0729 11:34:39.218974   54471 cri.go:89] found id: "1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8"
	I0729 11:34:39.218980   54471 cri.go:89] found id: "02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2"
	I0729 11:34:39.218985   54471 cri.go:89] found id: "ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc"
	I0729 11:34:39.218990   54471 cri.go:89] found id: "ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5"
	I0729 11:34:39.219030   54471 cri.go:89] found id: "46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c"
	I0729 11:34:39.219044   54471 cri.go:89] found id: "2a9e43dbf0cad314eabc7066e0ddfc23d008b1295bf7de2d3890ae76ebdc8779"
	I0729 11:34:39.219052   54471 cri.go:89] found id: "44654efa8137ba72b4408b334440f2e7a6c01b81b19b238b73ef5b8d719cfbae"
	I0729 11:34:39.219062   54471 cri.go:89] found id: "9199fe3dfa0bd106bf90ed32397cbede811cb6ecb474b10097c37ee3ce50d79f"
	I0729 11:34:39.219071   54471 cri.go:89] found id: "5306108860ba1c1d3ea60cfe162e8ce223f2689df224919390b8928367e555f0"
	I0729 11:34:39.219076   54471 cri.go:89] found id: "f20459fac0b7ac8d9ea13916e2a3e05c8ddf2c2d2d2f5cb4885a140684287cf4"
	I0729 11:34:39.219085   54471 cri.go:89] found id: ""
	I0729 11:34:39.219143   54471 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-581851 -n pause-581851
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-581851 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-581851 logs -n 25: (1.729212287s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:33 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p pause-581851 --memory=2048       | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:34 UTC |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h             |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p force-systemd-env-802488         | force-systemd-env-802488  | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p pause-581851                     | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:35 UTC |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p auto-184479 --memory=3072        | auto-184479               | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p kindnet-184479                   | kindnet-184479            | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --memory=3072                       |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-802488         | force-systemd-env-802488  | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p calico-184479 --memory=3072      | calico-184479             | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:34:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:34:48.756383   55365 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:34:48.756477   55365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:48.756485   55365 out.go:304] Setting ErrFile to fd 2...
	I0729 11:34:48.756489   55365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:48.756681   55365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:34:48.757255   55365 out.go:298] Setting JSON to false
	I0729 11:34:48.758150   55365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4635,"bootTime":1722248254,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:34:48.758204   55365 start.go:139] virtualization: kvm guest
	I0729 11:34:48.760289   55365 out.go:177] * [calico-184479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:34:48.762411   55365 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:34:48.762434   55365 notify.go:220] Checking for updates...
	I0729 11:34:48.765252   55365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:34:48.766758   55365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:34:48.768275   55365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:48.769746   55365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:34:48.771310   55365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:34:48.773262   55365 config.go:182] Loaded profile config "auto-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773447   55365 config.go:182] Loaded profile config "kindnet-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773633   55365 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773790   55365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:34:48.811123   55365 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:34:48.812711   55365 start.go:297] selected driver: kvm2
	I0729 11:34:48.812727   55365 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:34:48.812737   55365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:34:48.813474   55365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:48.813554   55365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:34:48.828871   55365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:34:48.828923   55365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:34:48.829143   55365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:34:48.829165   55365 cni.go:84] Creating CNI manager for "calico"
	I0729 11:34:48.829173   55365 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 11:34:48.829219   55365 start.go:340] cluster config:
	{Name:calico-184479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:48.829342   55365 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:48.831279   55365 out.go:177] * Starting "calico-184479" primary control-plane node in "calico-184479" cluster
	I0729 11:34:49.953786   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:49.954396   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find current IP address of domain auto-184479 in network mk-auto-184479
	I0729 11:34:49.954418   54676 main.go:141] libmachine: (auto-184479) DBG | I0729 11:34:49.954340   55076 retry.go:31] will retry after 4.394100822s: waiting for machine to come up
	I0729 11:34:48.832759   55365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:34:48.832793   55365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:34:48.832803   55365 cache.go:56] Caching tarball of preloaded images
	I0729 11:34:48.832892   55365 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:34:48.832902   55365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:34:48.832985   55365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/config.json ...
	I0729 11:34:48.833002   55365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/config.json: {Name:mkc9ec4c4f3eac623af233407469304c6b526181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:34:48.833125   55365 start.go:360] acquireMachinesLock for calico-184479: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:34:54.350530   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:54.351084   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find current IP address of domain auto-184479 in network mk-auto-184479
	I0729 11:34:54.351108   54676 main.go:141] libmachine: (auto-184479) DBG | I0729 11:34:54.351019   55076 retry.go:31] will retry after 3.735950942s: waiting for machine to come up
	I0729 11:34:58.088969   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.089438   54676 main.go:141] libmachine: (auto-184479) Found IP for machine: 192.168.39.78
	I0729 11:34:58.089482   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has current primary IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.089495   54676 main.go:141] libmachine: (auto-184479) Reserving static IP address...
	I0729 11:34:58.089890   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find host DHCP lease matching {name: "auto-184479", mac: "52:54:00:17:a6:79", ip: "192.168.39.78"} in network mk-auto-184479
	I0729 11:34:58.163962   54676 main.go:141] libmachine: (auto-184479) DBG | Getting to WaitForSSH function...
	I0729 11:34:58.164010   54676 main.go:141] libmachine: (auto-184479) Reserved static IP address: 192.168.39.78
	I0729 11:34:58.164023   54676 main.go:141] libmachine: (auto-184479) Waiting for SSH to be available...
	I0729 11:34:58.166969   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.167440   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.167474   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.167637   54676 main.go:141] libmachine: (auto-184479) DBG | Using SSH client type: external
	I0729 11:34:58.167666   54676 main.go:141] libmachine: (auto-184479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa (-rw-------)
	I0729 11:34:58.167714   54676 main.go:141] libmachine: (auto-184479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:34:58.167729   54676 main.go:141] libmachine: (auto-184479) DBG | About to run SSH command:
	I0729 11:34:58.167762   54676 main.go:141] libmachine: (auto-184479) DBG | exit 0
	I0729 11:34:58.290805   54676 main.go:141] libmachine: (auto-184479) DBG | SSH cmd err, output: <nil>: 
	I0729 11:34:58.291065   54676 main.go:141] libmachine: (auto-184479) KVM machine creation complete!
	I0729 11:34:58.291404   54676 main.go:141] libmachine: (auto-184479) Calling .GetConfigRaw
	I0729 11:34:58.291952   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:58.292108   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:58.292293   54676 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:34:58.292309   54676 main.go:141] libmachine: (auto-184479) Calling .GetState
	I0729 11:34:58.293536   54676 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:34:58.293551   54676 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:34:58.293558   54676 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:34:58.293566   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.295984   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.296355   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.296377   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.296512   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.296657   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.296819   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.296993   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.297163   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.297356   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.297366   54676 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:34:58.402148   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:34:58.402179   54676 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:34:58.402192   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.404888   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.405292   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.405318   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.405438   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.405623   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.405793   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.405911   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.406086   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.406247   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.406258   54676 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:34:58.511418   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:34:58.511528   54676 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:34:58.511545   54676 main.go:141] libmachine: Provisioning with buildroot...
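The provisioner-detection step above runs `cat /etc/os-release` over SSH and matches the result against known hosts ("buildroot" here). A rough sketch of parsing os-release key/value pairs, using the exact output captured in the log; the parseOSRelease helper is illustrative, not libmachine's actual detector:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release content into a key/value map.
func parseOSRelease(content string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	// Output captured over SSH in the log above.
	content := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

	fields := parseOSRelease(content)
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host:", fields["ID"], fields["VERSION_ID"])
	}
}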
	I0729 11:34:58.511556   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.511796   54676 buildroot.go:166] provisioning hostname "auto-184479"
	I0729 11:34:58.511820   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.511995   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.514732   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.515221   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.515245   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.515478   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.515654   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.515810   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.515949   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.516311   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.516533   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.516548   54676 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-184479 && echo "auto-184479" | sudo tee /etc/hostname
	I0729 11:34:58.637990   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-184479
	
	I0729 11:34:58.638024   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.641106   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.641454   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.641485   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.641728   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.642011   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.642207   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.642368   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.642554   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.642779   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.642803   54676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-184479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-184479/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-184479' | sudo tee -a /etc/hosts; 
				fi
			fi
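The shell fragment above is the /etc/hosts update minikube sends over SSH: keep an existing entry for the hostname, otherwise rewrite the 127.0.1.1 line or append one. The same check-and-replace logic expressed in Go on an in-memory hosts string (ensureHostname is a hypothetical helper, shown only to make the branching explicit):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts content with a 127.0.1.1 entry for name,
// replacing an existing 127.0.1.1 line or appending a new one.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, "\t"+name) || strings.HasSuffix(t, " "+name) {
			return hosts // hostname already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube"
	fmt.Println(ensureHostname(hosts, "auto-184479"))
}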
	I0729 11:34:59.588035   55120 start.go:364] duration metric: took 22.904192673s to acquireMachinesLock for "kindnet-184479"
	I0729 11:34:59.588096   55120 start.go:93] Provisioning new machine with config: &{Name:kindnet-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:34:59.588277   55120 start.go:125] createHost starting for "" (driver="kvm2")
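The kindnet-184479 profile above waited roughly 23 seconds in acquireMachinesLock (configured with a 500ms retry delay and a 13m timeout) before it could start createHost. A toy in-process sketch of a per-name lock acquired by polling with a delay and deadline; minikube's real lock is file-based, so this is only an illustration of the timing behaviour:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// nameLocks is a per-name lock table; Acquire polls every delay until the
// named lock is free or the timeout expires.
type nameLocks struct {
	mu   sync.Mutex
	held map[string]bool
}

func (n *nameLocks) Acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		n.mu.Lock()
		if !n.held[name] {
			n.held[name] = true
			n.mu.Unlock()
			return nil
		}
		n.mu.Unlock()
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring lock for " + name)
		}
		time.Sleep(delay)
	}
}

func (n *nameLocks) Release(name string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	delete(n.held, name)
}

func main() {
	locks := &nameLocks{held: map[string]bool{}}
	start := time.Now()
	if err := locks.Acquire("kindnet-184479", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer locks.Release("kindnet-184479")
	fmt.Printf("took %v to acquire lock\n", time.Since(start))
}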
	I0729 11:34:58.756024   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:34:58.756051   54676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:34:58.756071   54676 buildroot.go:174] setting up certificates
	I0729 11:34:58.756083   54676 provision.go:84] configureAuth start
	I0729 11:34:58.756095   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.756402   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:58.759029   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.759335   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.759362   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.759504   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.761677   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.762057   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.762087   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.762236   54676 provision.go:143] copyHostCerts
	I0729 11:34:58.762290   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:34:58.762303   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:34:58.762387   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:34:58.762505   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:34:58.762517   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:34:58.762548   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:34:58.762638   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:34:58.762648   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:34:58.762674   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:34:58.762780   54676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.auto-184479 san=[127.0.0.1 192.168.39.78 auto-184479 localhost minikube]
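The provision step above generates a server certificate signed by the profile's CA, with the SANs listed in the log (127.0.0.1, 192.168.39.78, auto-184479, localhost, minikube) and org jenkins.auto-184479. A self-contained crypto/x509 sketch of that shape of certificate; minikube loads its CA from ca.pem/ca-key.pem, whereas this sketch generates one in memory just to stay runnable:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stand-in for ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-184479"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-184479", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pemOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes)\n", len(pemOut))
}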
	I0729 11:34:58.902280   54676 provision.go:177] copyRemoteCerts
	I0729 11:34:58.902335   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:34:58.902361   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.904858   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.905196   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.905237   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.905383   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.905571   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.905724   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.905853   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:58.989419   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:34:59.015031   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:34:59.043074   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 11:34:59.070539   54676 provision.go:87] duration metric: took 314.443173ms to configureAuth
	I0729 11:34:59.070571   54676 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:34:59.070754   54676 config.go:182] Loaded profile config "auto-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:59.070818   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.073440   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.073766   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.073795   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.073949   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.074156   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.074307   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.074437   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.074610   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:59.074813   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:59.074834   54676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:34:59.342345   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:34:59.342377   54676 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:34:59.342388   54676 main.go:141] libmachine: (auto-184479) Calling .GetURL
	I0729 11:34:59.343710   54676 main.go:141] libmachine: (auto-184479) DBG | Using libvirt version 6000000
	I0729 11:34:59.345744   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.346102   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.346137   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.346286   54676 main.go:141] libmachine: Docker is up and running!
	I0729 11:34:59.346304   54676 main.go:141] libmachine: Reticulating splines...
	I0729 11:34:59.346311   54676 client.go:171] duration metric: took 25.286758221s to LocalClient.Create
	I0729 11:34:59.346333   54676 start.go:167] duration metric: took 25.286818391s to libmachine.API.Create "auto-184479"
	I0729 11:34:59.346341   54676 start.go:293] postStartSetup for "auto-184479" (driver="kvm2")
	I0729 11:34:59.346350   54676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:34:59.346371   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.346588   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:34:59.346611   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.348558   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.348839   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.348871   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.348981   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.349149   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.349297   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.349451   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.433874   54676 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:34:59.438615   54676 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:34:59.438639   54676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:34:59.438687   54676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:34:59.438789   54676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:34:59.438874   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:34:59.448605   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:34:59.473403   54676 start.go:296] duration metric: took 127.048945ms for postStartSetup
	I0729 11:34:59.473462   54676 main.go:141] libmachine: (auto-184479) Calling .GetConfigRaw
	I0729 11:34:59.474054   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:59.476744   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.477088   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.477105   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.477396   54676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/config.json ...
	I0729 11:34:59.477567   54676 start.go:128] duration metric: took 25.505299333s to createHost
	I0729 11:34:59.477593   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.479814   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.480127   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.480159   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.480368   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.480547   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.480740   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.480881   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.481080   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:59.481347   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:59.481369   54676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:34:59.587862   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252899.560353550
	
	I0729 11:34:59.587886   54676 fix.go:216] guest clock: 1722252899.560353550
	I0729 11:34:59.587905   54676 fix.go:229] Guest: 2024-07-29 11:34:59.56035355 +0000 UTC Remote: 2024-07-29 11:34:59.477580142 +0000 UTC m=+50.787512074 (delta=82.773408ms)
	I0729 11:34:59.587931   54676 fix.go:200] guest clock delta is within tolerance: 82.773408ms
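The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the ~83ms delta as within tolerance. A small Go sketch of that comparison, reusing the two timestamps from the log; the 2s threshold is an assumption for illustration, not necessarily minikube's tolerance value:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is
// no larger than max.
func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= max
}

func main() {
	guest := time.Unix(1722252899, 560353550).UTC() // parsed from `date +%s.%N` on the VM
	host := time.Date(2024, 7, 29, 11, 34, 59, 477580142, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta=82.773408ms, as in the log
}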
	I0729 11:34:59.587941   54676 start.go:83] releasing machines lock for "auto-184479", held for 25.615879857s
	I0729 11:34:59.587974   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.588270   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:59.591070   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.591491   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.591522   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.591708   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592333   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592551   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592650   54676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:34:59.592689   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.592776   54676 ssh_runner.go:195] Run: cat /version.json
	I0729 11:34:59.592817   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.595683   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.595903   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.595995   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.596019   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.596204   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.596312   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.596333   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.596375   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.596574   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.596578   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.596747   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.596754   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.596872   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.597004   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.701108   54676 ssh_runner.go:195] Run: systemctl --version
	I0729 11:34:59.708963   54676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:34:59.890412   54676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:34:59.897427   54676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:34:59.897502   54676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:34:59.914296   54676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
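The cni.go lines above find bridge/podman configs under /etc/cni/net.d and rename them to *.mk_disabled so they do not conflict with the CNI minikube will install. A Go sketch of that rename pass, run against a temp directory rather than /etc/cni/net.d so it stays harmless; disableBridgeConfigs is an illustrative helper, not minikube's function:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames matching CNI config files to *.mk_disabled,
// mirroring the find/mv command in the log.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	dir, _ := os.MkdirTemp("", "cni")
	os.WriteFile(filepath.Join(dir, "87-podman-bridge.conflist"), []byte("{}"), 0o644)
	got, err := disableBridgeConfigs(dir)
	fmt.Println(got, err)
}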
	I0729 11:34:59.914324   54676 start.go:495] detecting cgroup driver to use...
	I0729 11:34:59.914398   54676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:34:59.936357   54676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:34:59.954461   54676 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:34:59.954540   54676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:34:59.969469   54676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:34:59.983091   54676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:35:00.110819   54676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:35:00.267438   54676 docker.go:233] disabling docker service ...
	I0729 11:35:00.267491   54676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:35:00.289826   54676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:35:00.307310   54676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:35:00.453877   54676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:35:00.583813   54676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:35:00.599378   54676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:35:00.620527   54676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:35:00.620589   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.631493   54676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:35:00.631556   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.642631   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.653733   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.664969   54676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:35:00.676195   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.687276   54676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.713604   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
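The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands: pin the pause image, switch cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod". The same substitutions expressed with Go regexps on a small sample config (the sample content is made up for illustration, not the real 02-crio.conf):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as in the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup driver to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then pin it to "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}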
	I0729 11:35:00.728556   54676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:35:00.740911   54676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:35:00.741055   54676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:35:00.756249   54676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
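Above, the bridge-nf-call-iptables sysctl probe fails (the module is not loaded yet), so minikube falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A sketch of that check-then-fallback sequence; it requires root, touches kernel state, and is illustrative only:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the log's fallback: if the bridge-nf-call-iptables
// sysctl file is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Couldn't verify netfilter; try loading the module, as the log does.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}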
	I0729 11:35:00.769227   54676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:00.919185   54676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:35:01.076833   54676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:35:01.076907   54676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:35:01.082461   54676 start.go:563] Will wait 60s for crictl version
	I0729 11:35:01.082556   54676 ssh_runner.go:195] Run: which crictl
	I0729 11:35:01.086591   54676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:35:01.134877   54676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:35:01.134962   54676 ssh_runner.go:195] Run: crio --version
	I0729 11:35:01.177552   54676 ssh_runner.go:195] Run: crio --version
	I0729 11:35:01.217832   54676 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:34:59.437826   54471 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6 fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606 1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8 02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2 ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c 2a9e43dbf0cad314eabc7066e0ddfc23d008b1295bf7de2d3890ae76ebdc8779 44654efa8137ba72b4408b334440f2e7a6c01b81b19b238b73ef5b8d719cfbae 9199fe3dfa0bd106bf90ed32397cbede811cb6ecb474b10097c37ee3ce50d79f 5306108860ba1c1d3ea60cfe162e8ce223f2689df224919390b8928367e555f0 f20459fac0b7ac8d9ea13916e2a3e05c8ddf2c2d2d2f5cb4885a140684287cf4: (20.040569015s)

	W0729 11:34:59.437902   54471 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6 fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606 1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8 02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2 ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c 2a9e43dbf0cad314eabc7066e0ddfc23d008b1295bf7de2d3890ae76ebdc8779 44654efa8137ba72b4408b334440f2e7a6c01b81b19b238b73ef5b8d719cfbae 9199fe3dfa0bd106bf90ed32397cbede811cb6ecb474b10097c37ee3ce50d79f 5306108860ba1c1d3ea60cfe162e8ce223f2689df224919390b8928367e555f0 f20459fac0b7ac8d9ea13916e2a3e05c8ddf2c2d2d2f5cb4885a140684287cf4: Process exited with status 1
	stdout:
	269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6
	fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606
	1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8
	02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2
	ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc
	ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5
	
	stderr:
	E0729 11:34:59.431503    2935 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": container with ID starting with 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c not found: ID does not exist" containerID="46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c"
	time="2024-07-29T11:34:59Z" level=fatal msg="stopping the container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": rpc error: code = NotFound desc = could not find container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": container with ID starting with 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c not found: ID does not exist"
	I0729 11:34:59.437999   54471 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:34:59.484848   54471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:34:59.497340   54471 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 11:33 /etc/kubernetes/scheduler.conf
	
	I0729 11:34:59.497397   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:34:59.508800   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:34:59.520154   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:34:59.529887   54471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:34:59.529943   54471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:34:59.539950   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:34:59.551107   54471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:34:59.551173   54471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
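Above, kubeadm.go greps each pre-existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the ones that do not reference it (controller-manager.conf and scheduler.conf here) before re-running the kubeadm init phases. A sketch of that grep-then-remove pass over arbitrary paths; pruneStaleKubeconfigs is an illustrative helper and the demo uses a temp directory instead of /etc/kubernetes:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do
// not reference the expected control-plane endpoint.
func pruneStaleKubeconfigs(endpoint string, paths []string) []string {
	var removed []string
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			if os.Remove(p) == nil {
				removed = append(removed, p)
			}
		}
	}
	return removed
}

func main() {
	dir, _ := os.MkdirTemp("", "kubeconfigs")
	stale := dir + "/scheduler.conf"
	os.WriteFile(stale, []byte("server: https://192.168.50.10:8443"), 0o600)
	removed := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{stale})
	fmt.Println("removed:", removed)
}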
	I0729 11:34:59.561108   54471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:34:59.571450   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:34:59.646222   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.130026   54471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.483765391s)
	I0729 11:35:01.130098   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.396166   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.481956   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.719923   54471 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:35:01.720021   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:02.220840   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
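After the kubeadm init phases, api_server.go waits for the kube-apiserver process to appear by repeatedly running pgrep, as the two lines above show. A small polling sketch of that wait; the 500ms interval and 30s timeout are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows
// up or the deadline passes, like the repeated pgrep runs in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver process is up")
	}
}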
	I0729 11:35:01.219598   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:35:01.222738   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:35:01.223189   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:35:01.223250   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:35:01.223473   54676 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:35:01.231679   54676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:35:01.252941   54676 kubeadm.go:883] updating cluster {Name:auto-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:35:01.253118   54676 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:35:01.253689   54676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:35:01.292431   54676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:35:01.292518   54676 ssh_runner.go:195] Run: which lz4
	I0729 11:35:01.298863   54676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:35:01.305924   54676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:35:01.305963   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:35:02.964847   54676 crio.go:462] duration metric: took 1.66603819s to copy over tarball
	I0729 11:35:02.964948   54676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
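Because no preload was found on the VM (the stat of /preloaded.tar.lz4 failed above), minikube copies the ~406MB preload tarball over scp and unpacks it into /var with lz4-compressed tar, preserving xattrs. A sketch of the extraction step, with the same tar flags as the log; it assumes the tarball has already been copied to the host and would need root to write under /var:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image preload into dest, mirroring
// the tar invocation in the log.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload missing, would need to copy it over first: %w", err)
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}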
	I0729 11:34:59.590468   55120 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 11:34:59.590661   55120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:59.590736   55120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:59.610471   55120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0729 11:34:59.610996   55120 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:59.611635   55120 main.go:141] libmachine: Using API Version  1
	I0729 11:34:59.611680   55120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:59.612045   55120 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:59.612230   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:34:59.612421   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:34:59.612613   55120 start.go:159] libmachine.API.Create for "kindnet-184479" (driver="kvm2")
	I0729 11:34:59.612644   55120 client.go:168] LocalClient.Create starting
	I0729 11:34:59.612702   55120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 11:34:59.612744   55120 main.go:141] libmachine: Decoding PEM data...
	I0729 11:34:59.612767   55120 main.go:141] libmachine: Parsing certificate...
	I0729 11:34:59.612845   55120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 11:34:59.612873   55120 main.go:141] libmachine: Decoding PEM data...
	I0729 11:34:59.612900   55120 main.go:141] libmachine: Parsing certificate...
	I0729 11:34:59.612926   55120 main.go:141] libmachine: Running pre-create checks...
	I0729 11:34:59.612942   55120 main.go:141] libmachine: (kindnet-184479) Calling .PreCreateCheck
	I0729 11:34:59.613364   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetConfigRaw
	I0729 11:34:59.613806   55120 main.go:141] libmachine: Creating machine...
	I0729 11:34:59.613832   55120 main.go:141] libmachine: (kindnet-184479) Calling .Create
	I0729 11:34:59.613983   55120 main.go:141] libmachine: (kindnet-184479) Creating KVM machine...
	I0729 11:34:59.615552   55120 main.go:141] libmachine: (kindnet-184479) DBG | found existing default KVM network
	I0729 11:34:59.616745   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.616551   55451 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a8:3b:42} reservation:<nil>}
	I0729 11:34:59.617432   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.617334   55451 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0d:5d:21} reservation:<nil>}
	I0729 11:34:59.618621   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.618509   55451 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a000}
	I0729 11:34:59.618722   55120 main.go:141] libmachine: (kindnet-184479) DBG | created network xml: 
	I0729 11:34:59.618744   55120 main.go:141] libmachine: (kindnet-184479) DBG | <network>
	I0729 11:34:59.618755   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <name>mk-kindnet-184479</name>
	I0729 11:34:59.618766   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <dns enable='no'/>
	I0729 11:34:59.618775   55120 main.go:141] libmachine: (kindnet-184479) DBG |   
	I0729 11:34:59.618784   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0729 11:34:59.618805   55120 main.go:141] libmachine: (kindnet-184479) DBG |     <dhcp>
	I0729 11:34:59.618820   55120 main.go:141] libmachine: (kindnet-184479) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0729 11:34:59.618830   55120 main.go:141] libmachine: (kindnet-184479) DBG |     </dhcp>
	I0729 11:34:59.618836   55120 main.go:141] libmachine: (kindnet-184479) DBG |   </ip>
	I0729 11:34:59.618844   55120 main.go:141] libmachine: (kindnet-184479) DBG |   
	I0729 11:34:59.618850   55120 main.go:141] libmachine: (kindnet-184479) DBG | </network>
	I0729 11:34:59.618861   55120 main.go:141] libmachine: (kindnet-184479) DBG | 
	I0729 11:34:59.624665   55120 main.go:141] libmachine: (kindnet-184479) DBG | trying to create private KVM network mk-kindnet-184479 192.168.61.0/24...
	I0729 11:34:59.696581   55120 main.go:141] libmachine: (kindnet-184479) DBG | private KVM network mk-kindnet-184479 192.168.61.0/24 created
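The network.go lines above pick a private /24 for the new profile by skipping subnets already used by existing libvirt networks (192.168.39.0/24 and 192.168.50.0/24) and taking the first free candidate (192.168.61.0/24). A toy sketch of that first-free selection; the candidate list and taken set are hard-coded here, whereas minikube derives them from the host's networks:

package main

import (
	"fmt"
)

// firstFreeSubnet walks candidate private /24 subnets in order and returns
// the first one not already in use.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, c := range candidates {
		if taken[c] {
			fmt.Println("skipping subnet", c, "that is taken")
			continue
		}
		return c, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
	if subnet, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", subnet)
	}
}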
	I0729 11:34:59.696614   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.696538   55451 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:59.696627   55120 main.go:141] libmachine: (kindnet-184479) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 ...
	I0729 11:34:59.696659   55120 main.go:141] libmachine: (kindnet-184479) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:34:59.696776   55120 main.go:141] libmachine: (kindnet-184479) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:34:59.958470   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.958348   55451 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa...
	I0729 11:35:00.071095   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:00.070945   55451 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/kindnet-184479.rawdisk...
	I0729 11:35:00.071150   55120 main.go:141] libmachine: (kindnet-184479) DBG | Writing magic tar header
	I0729 11:35:00.071167   55120 main.go:141] libmachine: (kindnet-184479) DBG | Writing SSH key tar header
	I0729 11:35:00.071181   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:00.071100   55451 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 ...
	I0729 11:35:00.071245   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479
	I0729 11:35:00.071332   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 (perms=drwx------)
	I0729 11:35:00.071353   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 11:35:00.071365   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:35:00.071376   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:35:00.071390   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 11:35:00.071408   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 11:35:00.071422   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:35:00.071432   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 11:35:00.071503   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:35:00.071536   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:35:00.071548   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:35:00.071562   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home
	I0729 11:35:00.071572   55120 main.go:141] libmachine: (kindnet-184479) DBG | Skipping /home - not owner
	I0729 11:35:00.071592   55120 main.go:141] libmachine: (kindnet-184479) Creating domain...
	I0729 11:35:00.072600   55120 main.go:141] libmachine: (kindnet-184479) define libvirt domain using xml: 
	I0729 11:35:00.072621   55120 main.go:141] libmachine: (kindnet-184479) <domain type='kvm'>
	I0729 11:35:00.072632   55120 main.go:141] libmachine: (kindnet-184479)   <name>kindnet-184479</name>
	I0729 11:35:00.072640   55120 main.go:141] libmachine: (kindnet-184479)   <memory unit='MiB'>3072</memory>
	I0729 11:35:00.072649   55120 main.go:141] libmachine: (kindnet-184479)   <vcpu>2</vcpu>
	I0729 11:35:00.072666   55120 main.go:141] libmachine: (kindnet-184479)   <features>
	I0729 11:35:00.072676   55120 main.go:141] libmachine: (kindnet-184479)     <acpi/>
	I0729 11:35:00.072692   55120 main.go:141] libmachine: (kindnet-184479)     <apic/>
	I0729 11:35:00.072712   55120 main.go:141] libmachine: (kindnet-184479)     <pae/>
	I0729 11:35:00.072736   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.072748   55120 main.go:141] libmachine: (kindnet-184479)   </features>
	I0729 11:35:00.072756   55120 main.go:141] libmachine: (kindnet-184479)   <cpu mode='host-passthrough'>
	I0729 11:35:00.072765   55120 main.go:141] libmachine: (kindnet-184479)   
	I0729 11:35:00.072773   55120 main.go:141] libmachine: (kindnet-184479)   </cpu>
	I0729 11:35:00.072784   55120 main.go:141] libmachine: (kindnet-184479)   <os>
	I0729 11:35:00.072794   55120 main.go:141] libmachine: (kindnet-184479)     <type>hvm</type>
	I0729 11:35:00.072810   55120 main.go:141] libmachine: (kindnet-184479)     <boot dev='cdrom'/>
	I0729 11:35:00.072826   55120 main.go:141] libmachine: (kindnet-184479)     <boot dev='hd'/>
	I0729 11:35:00.072839   55120 main.go:141] libmachine: (kindnet-184479)     <bootmenu enable='no'/>
	I0729 11:35:00.072847   55120 main.go:141] libmachine: (kindnet-184479)   </os>
	I0729 11:35:00.072863   55120 main.go:141] libmachine: (kindnet-184479)   <devices>
	I0729 11:35:00.072880   55120 main.go:141] libmachine: (kindnet-184479)     <disk type='file' device='cdrom'>
	I0729 11:35:00.072906   55120 main.go:141] libmachine: (kindnet-184479)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/boot2docker.iso'/>
	I0729 11:35:00.072919   55120 main.go:141] libmachine: (kindnet-184479)       <target dev='hdc' bus='scsi'/>
	I0729 11:35:00.072929   55120 main.go:141] libmachine: (kindnet-184479)       <readonly/>
	I0729 11:35:00.072940   55120 main.go:141] libmachine: (kindnet-184479)     </disk>
	I0729 11:35:00.072953   55120 main.go:141] libmachine: (kindnet-184479)     <disk type='file' device='disk'>
	I0729 11:35:00.072962   55120 main.go:141] libmachine: (kindnet-184479)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:35:00.072978   55120 main.go:141] libmachine: (kindnet-184479)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/kindnet-184479.rawdisk'/>
	I0729 11:35:00.072988   55120 main.go:141] libmachine: (kindnet-184479)       <target dev='hda' bus='virtio'/>
	I0729 11:35:00.073000   55120 main.go:141] libmachine: (kindnet-184479)     </disk>
	I0729 11:35:00.073008   55120 main.go:141] libmachine: (kindnet-184479)     <interface type='network'>
	I0729 11:35:00.073017   55120 main.go:141] libmachine: (kindnet-184479)       <source network='mk-kindnet-184479'/>
	I0729 11:35:00.073031   55120 main.go:141] libmachine: (kindnet-184479)       <model type='virtio'/>
	I0729 11:35:00.073044   55120 main.go:141] libmachine: (kindnet-184479)     </interface>
	I0729 11:35:00.073054   55120 main.go:141] libmachine: (kindnet-184479)     <interface type='network'>
	I0729 11:35:00.073063   55120 main.go:141] libmachine: (kindnet-184479)       <source network='default'/>
	I0729 11:35:00.073073   55120 main.go:141] libmachine: (kindnet-184479)       <model type='virtio'/>
	I0729 11:35:00.073084   55120 main.go:141] libmachine: (kindnet-184479)     </interface>
	I0729 11:35:00.073093   55120 main.go:141] libmachine: (kindnet-184479)     <serial type='pty'>
	I0729 11:35:00.073111   55120 main.go:141] libmachine: (kindnet-184479)       <target port='0'/>
	I0729 11:35:00.073127   55120 main.go:141] libmachine: (kindnet-184479)     </serial>
	I0729 11:35:00.073139   55120 main.go:141] libmachine: (kindnet-184479)     <console type='pty'>
	I0729 11:35:00.073149   55120 main.go:141] libmachine: (kindnet-184479)       <target type='serial' port='0'/>
	I0729 11:35:00.073160   55120 main.go:141] libmachine: (kindnet-184479)     </console>
	I0729 11:35:00.073168   55120 main.go:141] libmachine: (kindnet-184479)     <rng model='virtio'>
	I0729 11:35:00.073179   55120 main.go:141] libmachine: (kindnet-184479)       <backend model='random'>/dev/random</backend>
	I0729 11:35:00.073186   55120 main.go:141] libmachine: (kindnet-184479)     </rng>
	I0729 11:35:00.073212   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.073234   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.073246   55120 main.go:141] libmachine: (kindnet-184479)   </devices>
	I0729 11:35:00.073256   55120 main.go:141] libmachine: (kindnet-184479) </domain>
	I0729 11:35:00.073266   55120 main.go:141] libmachine: (kindnet-184479) 
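The XML dump above is the libvirt domain definition the kvm2 driver builds for the guest. As a rough illustration only (not minikube's driver code), the same define-and-boot step could be done by shelling out to virsh; the file path and domain name below are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a guest from a libvirt domain XML file and boots it.
func defineAndStart(xmlPath, name string) error {
	// "virsh define" loads the persistent domain definition.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("define %s: %v: %s", xmlPath, err, out)
	}
	// "virsh start" boots the freshly defined domain.
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("start %s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	// Placeholder path; the real driver talks to qemu:///system directly.
	if err := defineAndStart("/tmp/kindnet-184479.xml", "kindnet-184479"); err != nil {
		fmt.Println(err)
	}
}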
	I0729 11:35:00.077575   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:10:d4:f3 in network default
	I0729 11:35:00.078121   55120 main.go:141] libmachine: (kindnet-184479) Ensuring networks are active...
	I0729 11:35:00.078137   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:00.078872   55120 main.go:141] libmachine: (kindnet-184479) Ensuring network default is active
	I0729 11:35:00.079155   55120 main.go:141] libmachine: (kindnet-184479) Ensuring network mk-kindnet-184479 is active
	I0729 11:35:00.079626   55120 main.go:141] libmachine: (kindnet-184479) Getting domain xml...
	I0729 11:35:00.080270   55120 main.go:141] libmachine: (kindnet-184479) Creating domain...
	I0729 11:35:01.381264   55120 main.go:141] libmachine: (kindnet-184479) Waiting to get IP...
	I0729 11:35:01.382042   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.382457   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.382514   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.382451   55451 retry.go:31] will retry after 251.500731ms: waiting for machine to come up
	I0729 11:35:01.635981   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.637037   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.637065   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.636950   55451 retry.go:31] will retry after 285.308466ms: waiting for machine to come up
	I0729 11:35:01.923724   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.924364   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.924392   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.924315   55451 retry.go:31] will retry after 336.487987ms: waiting for machine to come up
	I0729 11:35:02.262967   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:02.263621   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:02.263657   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:02.263520   55451 retry.go:31] will retry after 546.810498ms: waiting for machine to come up
	I0729 11:35:02.812347   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:02.812810   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:02.812836   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:02.812773   55451 retry.go:31] will retry after 556.820256ms: waiting for machine to come up
	I0729 11:35:03.371563   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:03.372067   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:03.372097   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:03.371997   55451 retry.go:31] will retry after 839.666439ms: waiting for machine to come up
	I0729 11:35:04.213162   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:04.213667   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:04.213700   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:04.213609   55451 retry.go:31] will retry after 831.209735ms: waiting for machine to come up
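The "will retry after ..." lines follow a simple grow-the-delay polling pattern while the new VM acquires a DHCP lease. A minimal Go sketch of that pattern, with illustrative delays and a placeholder lookup function (not the actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a growing, jittered delay until it returns an
// address or the overall timeout expires, mirroring the retry lines in the log.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Sleep for the current delay plus jitter, then grow the delay.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Placeholder lookup that never succeeds, to show the timeout path.
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}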
	I0729 11:35:02.720248   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:02.772992   54471 api_server.go:72] duration metric: took 1.053069332s to wait for apiserver process to appear ...
	I0729 11:35:02.773020   54471 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:35:02.773047   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.423503   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:35:05.423539   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:35:05.423555   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.434759   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:35:05.434793   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:35:05.773162   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.779279   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:05.779310   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:06.273951   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:06.279939   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:06.279962   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:06.773198   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:06.779745   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:06.779768   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:07.273925   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:07.279513   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 11:35:07.286405   54471 api_server.go:141] control plane version: v1.30.3
	I0729 11:35:07.286440   54471 api_server.go:131] duration metric: took 4.513412734s to wait for apiserver health ...
	I0729 11:35:07.286452   54471 cni.go:84] Creating CNI manager for ""
	I0729 11:35:07.286462   54471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:07.471446   54471 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
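The sequence above (403 for the anonymous probe, 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200 "ok") is a plain poll of the apiserver's /healthz endpoint. A hedged Go sketch of such a probe, assuming the endpoint URL from the log and an arbitrary poll interval:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the timeout expires, printing intermediate 403/500 bodies like the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The probe does not trust the cluster CA here, so skip verification;
		// this mirrors an unauthenticated health probe, not a kubectl call.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.53:8443/healthz", 2*time.Minute))
}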
	I0729 11:35:05.735820   54676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770846279s)
	I0729 11:35:05.735849   54676 crio.go:469] duration metric: took 2.770952258s to extract the tarball
	I0729 11:35:05.735859   54676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:35:05.782200   54676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:35:05.837344   54676 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:35:05.837364   54676 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:35:05.837373   54676 kubeadm.go:934] updating node { 192.168.39.78 8443 v1.30.3 crio true true} ...
	I0729 11:35:05.837482   54676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-184479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:35:05.837570   54676 ssh_runner.go:195] Run: crio config
	I0729 11:35:05.882600   54676 cni.go:84] Creating CNI manager for ""
	I0729 11:35:05.882622   54676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:05.882633   54676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:35:05.882655   54676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-184479 NodeName:auto-184479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:35:05.882855   54676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-184479"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:35:05.882936   54676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:35:05.893016   54676 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:35:05.893090   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:35:05.903442   54676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0729 11:35:05.920790   54676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:35:05.937622   54676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0729 11:35:05.955864   54676 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0729 11:35:05.960526   54676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
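The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts. A simplified Go equivalent, assuming the same path and hostname and omitting the sudo/temp-file handling:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip<TAB>host",
// the same effect as the grep/echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.78", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}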
	I0729 11:35:05.973924   54676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:06.105218   54676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:35:06.122693   54676 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479 for IP: 192.168.39.78
	I0729 11:35:06.122728   54676 certs.go:194] generating shared ca certs ...
	I0729 11:35:06.122753   54676 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.122923   54676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:35:06.122976   54676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:35:06.122988   54676 certs.go:256] generating profile certs ...
	I0729 11:35:06.123053   54676 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key
	I0729 11:35:06.123070   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt with IP's: []
	I0729 11:35:06.268361   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt ...
	I0729 11:35:06.268388   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: {Name:mk441be0f438f4cc71c9daed3645b6bb59ec29e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.268551   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key ...
	I0729 11:35:06.268561   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key: {Name:mkb514f0cef69973e786a2d311c2505eb21ab3cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.268637   54676 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29
	I0729 11:35:06.268651   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
	I0729 11:35:06.577069   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 ...
	I0729 11:35:06.577098   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29: {Name:mka5fb2b0572786db33d01b13abf5cdf5d406751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.577290   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29 ...
	I0729 11:35:06.577307   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29: {Name:mk8dd01ce37394dbab647df18ec9ea942c84b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.577401   54676 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt
	I0729 11:35:06.577498   54676 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key
	I0729 11:35:06.577554   54676 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key
	I0729 11:35:06.577568   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt with IP's: []
	I0729 11:35:06.669314   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt ...
	I0729 11:35:06.669341   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt: {Name:mkcc8cc24796d50a0f84ce77a4defd101d589c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.749498   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key ...
	I0729 11:35:06.749530   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key: {Name:mk4f8d1cbd9bc0aed6b3ba0ba9c23d1c6fef85ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.749800   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:35:06.749859   54676 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:35:06.749869   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:35:06.749900   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:35:06.749932   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:35:06.749963   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:35:06.750021   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:35:06.750814   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:35:06.777801   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:35:06.807185   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:35:06.856404   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:35:06.890114   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0729 11:35:06.919087   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:35:06.946491   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:35:06.973887   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:35:07.001577   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:35:07.030874   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:35:07.061191   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:35:07.090051   54676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:35:07.108372   54676 ssh_runner.go:195] Run: openssl version
	I0729 11:35:07.115154   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:35:07.127327   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.132931   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.133023   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.141065   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:35:07.157731   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:35:07.171241   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.176767   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.176858   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.183533   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:35:07.195718   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:35:07.208469   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.213945   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.214020   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.222252   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
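The ln -fs commands above install each PEM under its OpenSSL subject hash (for example b5213941.0) so the system trust store can find it. A small Go sketch of that hash-and-symlink step; the paths are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates the
// <hash>.0 symlink that TLS libraries expect in the system cert directory.
func linkByHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", pemPath, err)
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	// Replace any stale link, then point <hash>.0 at the PEM file.
	os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}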
	I0729 11:35:07.238415   54676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:35:07.244705   54676 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:35:07.244766   54676 kubeadm.go:392] StartCluster: {Name:auto-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:35:07.244846   54676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:35:07.244915   54676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:35:07.291535   54676 cri.go:89] found id: ""
	I0729 11:35:07.291617   54676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:35:07.302113   54676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:35:07.312323   54676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:35:07.322593   54676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:35:07.322614   54676 kubeadm.go:157] found existing configuration files:
	
	I0729 11:35:07.322659   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:35:07.332200   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:35:07.332258   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:35:07.342383   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:35:07.353207   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:35:07.353260   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:35:07.365201   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:35:07.375763   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:35:07.375830   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:35:07.386291   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:35:07.396187   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:35:07.396269   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
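The grep/rm pairs above implement a simple rule: a leftover kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is deleted before kubeadm init regenerates it. A compact Go sketch of that check (paths and endpoint taken from the log, error handling simplified):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStale deletes any kubeconfig that does not already reference the
// expected control-plane endpoint, so kubeadm can regenerate it cleanly.
func cleanStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it (the file may simply
			// not exist yet on a first start, as in this run).
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}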
	I0729 11:35:07.406330   54676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:35:07.477697   54676 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:35:07.477752   54676 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:35:07.654939   54676 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:35:07.655101   54676 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:35:07.655214   54676 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:35:07.883739   54676 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:35:08.000194   54676 out.go:204]   - Generating certificates and keys ...
	I0729 11:35:08.000325   54676 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:35:08.000406   54676 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:35:08.029753   54676 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:35:08.108140   54676 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:35:08.193580   54676 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:35:08.310599   54676 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:35:08.590874   54676 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:35:08.591176   54676 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-184479 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0729 11:35:08.672484   54676 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:35:08.672835   54676 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-184479 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0729 11:35:05.046490   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:05.047055   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:05.047085   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:05.047013   55451 retry.go:31] will retry after 1.299032255s: waiting for machine to come up
	I0729 11:35:06.348125   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:06.348512   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:06.348541   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:06.348465   55451 retry.go:31] will retry after 1.740256381s: waiting for machine to come up
	I0729 11:35:08.090046   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:08.090559   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:08.090591   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:08.090502   55451 retry.go:31] will retry after 2.171003514s: waiting for machine to come up
	I0729 11:35:08.762967   54676 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:35:09.107262   54676 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:35:09.280195   54676 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:35:09.280466   54676 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:35:09.459763   54676 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:35:09.560711   54676 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:35:09.813353   54676 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:35:10.015067   54676 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:35:10.212203   54676 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:35:10.212929   54676 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:35:10.215766   54676 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:35:07.569715   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:35:07.584803   54471 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:35:07.613560   54471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:35:08.479265   54471 system_pods.go:59] 6 kube-system pods found
	I0729 11:35:08.479325   54471 system_pods.go:61] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:35:08.479337   54471 system_pods.go:61] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:35:08.479357   54471 system_pods.go:61] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:35:08.479371   54471 system_pods.go:61] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:35:08.479382   54471 system_pods.go:61] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:35:08.479390   54471 system_pods.go:61] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:08.479400   54471 system_pods.go:74] duration metric: took 865.816731ms to wait for pod list to return data ...
	I0729 11:35:08.479412   54471 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:35:08.871909   54471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:35:08.871951   54471 node_conditions.go:123] node cpu capacity is 2
	I0729 11:35:08.871966   54471 node_conditions.go:105] duration metric: took 392.547456ms to run NodePressure ...
	I0729 11:35:08.871994   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:09.524230   54471 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:35:09.533772   54471 kubeadm.go:739] kubelet initialised
	I0729 11:35:09.533800   54471 kubeadm.go:740] duration metric: took 9.543229ms waiting for restarted kubelet to initialise ...
	I0729 11:35:09.533810   54471 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:09.540269   54471 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:11.554242   54471 pod_ready.go:102] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:12.050419   54471 pod_ready.go:92] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:12.050451   54471 pod_ready.go:81] duration metric: took 2.510153387s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:12.050463   54471 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
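The pod_ready.go lines poll each system pod until its Ready condition turns True. For illustration only, the same wait can be expressed with client-go; the kubeconfig location, namespace, pod name, and timeout below are assumptions, not minikube's helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its PodReady condition is True or the
// timeout expires, mirroring the "waiting up to 4m0s for pod ..." loop.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-pause-581851", 4*time.Minute))
}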
	I0729 11:35:10.217673   54676 out.go:204]   - Booting up control plane ...
	I0729 11:35:10.217792   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:35:10.217898   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:35:10.218010   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:35:10.243731   54676 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:35:10.244843   54676 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:35:10.244937   54676 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:35:10.372396   54676 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:35:10.372498   54676 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:35:11.372815   54676 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001126854s
	I0729 11:35:11.372975   54676 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:35:10.262974   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:10.263544   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:10.263584   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:10.263496   55451 retry.go:31] will retry after 2.411239645s: waiting for machine to come up
	I0729 11:35:12.676394   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:12.677030   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:12.677061   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:12.676967   55451 retry.go:31] will retry after 2.577129835s: waiting for machine to come up
	I0729 11:35:16.373273   54676 kubeadm.go:310] [api-check] The API server is healthy after 5.001948251s
	I0729 11:35:16.386417   54676 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:35:16.400103   54676 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:35:16.434865   54676 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:35:16.435137   54676 kubeadm.go:310] [mark-control-plane] Marking the node auto-184479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:35:16.447755   54676 kubeadm.go:310] [bootstrap-token] Using token: zuvxcj.n00cmdazyfmsft3c
	I0729 11:35:14.057608   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:16.057758   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:16.449323   54676 out.go:204]   - Configuring RBAC rules ...
	I0729 11:35:16.449475   54676 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:35:16.456943   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:35:16.464635   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:35:16.467894   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:35:16.472032   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:35:16.480543   54676 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:35:16.779046   54676 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:35:17.211146   54676 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:35:17.777315   54676 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:35:17.778809   54676 kubeadm.go:310] 
	I0729 11:35:17.778896   54676 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:35:17.778924   54676 kubeadm.go:310] 
	I0729 11:35:17.779032   54676 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:35:17.779048   54676 kubeadm.go:310] 
	I0729 11:35:17.779098   54676 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:35:17.779185   54676 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:35:17.779257   54676 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:35:17.779269   54676 kubeadm.go:310] 
	I0729 11:35:17.779313   54676 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:35:17.779319   54676 kubeadm.go:310] 
	I0729 11:35:17.779359   54676 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:35:17.779365   54676 kubeadm.go:310] 
	I0729 11:35:17.779408   54676 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:35:17.779489   54676 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:35:17.779565   54676 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:35:17.779575   54676 kubeadm.go:310] 
	I0729 11:35:17.779665   54676 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:35:17.779771   54676 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:35:17.779788   54676 kubeadm.go:310] 
	I0729 11:35:17.779946   54676 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zuvxcj.n00cmdazyfmsft3c \
	I0729 11:35:17.780085   54676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:35:17.780133   54676 kubeadm.go:310] 	--control-plane 
	I0729 11:35:17.780142   54676 kubeadm.go:310] 
	I0729 11:35:17.780243   54676 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:35:17.780252   54676 kubeadm.go:310] 
	I0729 11:35:17.780350   54676 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zuvxcj.n00cmdazyfmsft3c \
	I0729 11:35:17.780486   54676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:35:17.780858   54676 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:35:17.780884   54676 cni.go:84] Creating CNI manager for ""
	I0729 11:35:17.780921   54676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:17.782822   54676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:35:17.784339   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:35:17.797167   54676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:35:17.818764   54676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:35:17.818924   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-184479 minikube.k8s.io/updated_at=2024_07_29T11_35_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=auto-184479 minikube.k8s.io/primary=true
	I0729 11:35:17.818926   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:17.851332   54676 ops.go:34] apiserver oom_adj: -16
	I0729 11:35:17.939350   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:18.440391   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:15.255703   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:15.256171   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:15.256236   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:15.256157   55451 retry.go:31] will retry after 3.226973911s: waiting for machine to come up
	I0729 11:35:18.484415   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:18.484961   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:18.484991   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:18.484888   55451 retry.go:31] will retry after 4.742962857s: waiting for machine to come up
	I0729 11:35:18.558919   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:21.058030   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:21.557883   54471 pod_ready.go:92] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.557904   54471 pod_ready.go:81] duration metric: took 9.507433418s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.557913   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.562654   54471 pod_ready.go:92] pod "kube-apiserver-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.562671   54471 pod_ready.go:81] duration metric: took 4.751409ms for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.562680   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.567396   54471 pod_ready.go:92] pod "kube-controller-manager-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.567417   54471 pod_ready.go:81] duration metric: took 4.73005ms for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.567428   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.573533   54471 pod_ready.go:92] pod "kube-proxy-9c8zc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.573553   54471 pod_ready.go:81] duration metric: took 6.117529ms for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.573565   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.577880   54471 pod_ready.go:92] pod "kube-scheduler-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.577902   54471 pod_ready.go:81] duration metric: took 4.328068ms for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.577908   54471 pod_ready.go:38] duration metric: took 12.044087293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:21.577922   54471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:35:21.590432   54471 ops.go:34] apiserver oom_adj: -16
	I0729 11:35:21.590457   54471 kubeadm.go:597] duration metric: took 42.287441873s to restartPrimaryControlPlane
	I0729 11:35:21.590469   54471 kubeadm.go:394] duration metric: took 42.452792119s to StartCluster
	I0729 11:35:21.590489   54471 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:21.590578   54471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:35:21.591246   54471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:21.591467   54471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:35:21.591547   54471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:35:21.591751   54471 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:35:21.593428   54471 out.go:177] * Enabled addons: 
	I0729 11:35:21.593449   54471 out.go:177] * Verifying Kubernetes components...
	I0729 11:35:21.594798   54471 addons.go:510] duration metric: took 3.254769ms for enable addons: enabled=[]
	I0729 11:35:21.594916   54471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:21.765966   54471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:35:21.784069   54471 node_ready.go:35] waiting up to 6m0s for node "pause-581851" to be "Ready" ...
	I0729 11:35:21.786874   54471 node_ready.go:49] node "pause-581851" has status "Ready":"True"
	I0729 11:35:21.786900   54471 node_ready.go:38] duration metric: took 2.78954ms for node "pause-581851" to be "Ready" ...
	I0729 11:35:21.786910   54471 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:21.958767   54471 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
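
pod_ready.go repeatedly fetches each system-critical pod and inspects its Ready condition until it reports True or the wait budget (4m0s, later 6m0s) runs out. A minimal client-go sketch of that polling loop; the function and parameter names are hypothetical, not minikube's own code.

// Package readiness sketches the polling pattern behind the pod_ready.go lines above.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

With a clientset built for the pause-581851 context, a call such as waitPodReady(ctx, client, "kube-system", "coredns-7db6d8ff4d-fmbbt", 6*time.Minute) corresponds to the 6m0s wait logged above.
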
	I0729 11:35:18.940153   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:19.439367   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:19.939376   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:20.439903   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:20.940375   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:21.439921   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:21.939567   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:22.439942   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:22.939954   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:23.439533   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:23.232174   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.232720   55120 main.go:141] libmachine: (kindnet-184479) Found IP for machine: 192.168.61.227
	I0729 11:35:23.232744   55120 main.go:141] libmachine: (kindnet-184479) Reserving static IP address...
	I0729 11:35:23.232754   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has current primary IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.233162   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find host DHCP lease matching {name: "kindnet-184479", mac: "52:54:00:99:79:ab", ip: "192.168.61.227"} in network mk-kindnet-184479
	I0729 11:35:23.311759   55120 main.go:141] libmachine: (kindnet-184479) DBG | Getting to WaitForSSH function...
	I0729 11:35:23.311790   55120 main.go:141] libmachine: (kindnet-184479) Reserved static IP address: 192.168.61.227
	I0729 11:35:23.311804   55120 main.go:141] libmachine: (kindnet-184479) Waiting for SSH to be available...
	I0729 11:35:23.314378   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.314776   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.314806   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.314968   55120 main.go:141] libmachine: (kindnet-184479) DBG | Using SSH client type: external
	I0729 11:35:23.314993   55120 main.go:141] libmachine: (kindnet-184479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa (-rw-------)
	I0729 11:35:23.315042   55120 main.go:141] libmachine: (kindnet-184479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:35:23.315056   55120 main.go:141] libmachine: (kindnet-184479) DBG | About to run SSH command:
	I0729 11:35:23.315071   55120 main.go:141] libmachine: (kindnet-184479) DBG | exit 0
	I0729 11:35:23.443245   55120 main.go:141] libmachine: (kindnet-184479) DBG | SSH cmd err, output: <nil>: 
	I0729 11:35:23.443544   55120 main.go:141] libmachine: (kindnet-184479) KVM machine creation complete!
	I0729 11:35:23.443844   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetConfigRaw
	I0729 11:35:23.444438   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:23.444626   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:23.444778   55120 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:35:23.444792   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetState
	I0729 11:35:23.446126   55120 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:35:23.446144   55120 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:35:23.446152   55120 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:35:23.446159   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.448691   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.449142   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.449190   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.449314   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.449485   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.449653   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.449780   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.449994   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.450195   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.450208   55120 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:35:23.558411   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:35:23.558432   55120 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:35:23.558441   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.561527   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.561932   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.561993   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.562096   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.562309   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.562491   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.562623   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.562796   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.562965   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.562974   55120 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:35:23.667599   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:35:23.667680   55120 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:35:23.667699   55120 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:35:23.667713   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.667962   55120 buildroot.go:166] provisioning hostname "kindnet-184479"
	I0729 11:35:23.667987   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.668161   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.671023   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.671485   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.671517   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.671707   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.671891   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.672164   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.672350   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.672535   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.672730   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.672749   55120 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-184479 && echo "kindnet-184479" | sudo tee /etc/hostname
	I0729 11:35:23.793440   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-184479
	
	I0729 11:35:23.793467   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.796689   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.797234   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.797260   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.797455   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.797680   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.797892   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.798096   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.798272   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.798448   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.798463   55120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-184479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-184479/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-184479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:35:23.912548   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:35:23.912582   55120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:35:23.912630   55120 buildroot.go:174] setting up certificates
	I0729 11:35:23.912643   55120 provision.go:84] configureAuth start
	I0729 11:35:23.912660   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.912946   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetIP
	I0729 11:35:23.915791   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.916147   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.916174   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.916349   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.918500   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.918822   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.918846   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.919079   55120 provision.go:143] copyHostCerts
	I0729 11:35:23.919138   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:35:23.919151   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:35:23.919220   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:35:23.919358   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:35:23.919369   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:35:23.919401   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:35:23.919486   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:35:23.919497   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:35:23.919522   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:35:23.919595   55120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.kindnet-184479 san=[127.0.0.1 192.168.61.227 kindnet-184479 localhost minikube]
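
The "generating server cert" step above issues a server certificate signed by the profile's CA, with the SANs listed in the log (127.0.0.1, 192.168.61.227, kindnet-184479, localhost, minikube). A standalone crypto/x509 sketch of issuing such a certificate from an existing CA; the PKCS#1 key format, validity period and helper name are assumptions, not minikube's actual code.

// Package certs sketches issuing a server certificate from an existing CA key pair.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert returns a PEM-encoded server certificate for the SANs seen in the log.
// In real use the generated server private key would also be persisted (server-key.pem above).
func signServerCert(caCertPEM, caKeyPEM []byte) ([]byte, error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, fmt.Errorf("invalid CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		return nil, err
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-184479"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kindnet-184479", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.227")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}
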
	I0729 11:35:24.086399   55120 provision.go:177] copyRemoteCerts
	I0729 11:35:24.086449   55120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:35:24.086471   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.089646   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.090043   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.090073   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.090263   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.090503   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.090717   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.090871   55120 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa Username:docker}
	I0729 11:35:24.177525   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0729 11:35:24.204554   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:35:24.230747   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:35:24.257148   55120 provision.go:87] duration metric: took 344.486849ms to configureAuth
	I0729 11:35:24.257183   55120 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:35:24.257455   55120 config.go:182] Loaded profile config "kindnet-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:35:24.257533   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.260346   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.260679   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.260706   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.260925   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.261130   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.261334   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.261488   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.261654   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:24.261927   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:24.261949   55120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:35:24.808239   55365 start.go:364] duration metric: took 35.975039372s to acquireMachinesLock for "calico-184479"
	I0729 11:35:24.808321   55365 start.go:93] Provisioning new machine with config: &{Name:calico-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:calico-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:35:24.808474   55365 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:35:22.356291   54471 pod_ready.go:92] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:22.356321   54471 pod_ready.go:81] duration metric: took 397.52382ms for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.356334   54471 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.754759   54471 pod_ready.go:92] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:22.754783   54471 pod_ready.go:81] duration metric: took 398.442251ms for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.754793   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.154795   54471 pod_ready.go:92] pod "kube-apiserver-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.154818   54471 pod_ready.go:81] duration metric: took 400.019014ms for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.154830   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.554929   54471 pod_ready.go:92] pod "kube-controller-manager-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.554954   54471 pod_ready.go:81] duration metric: took 400.116953ms for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.554965   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.955191   54471 pod_ready.go:92] pod "kube-proxy-9c8zc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.955219   54471 pod_ready.go:81] duration metric: took 400.247192ms for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.955233   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:24.355613   54471 pod_ready.go:92] pod "kube-scheduler-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:24.355635   54471 pod_ready.go:81] duration metric: took 400.395216ms for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:24.355642   54471 pod_ready.go:38] duration metric: took 2.568722382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:24.355655   54471 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:35:24.355700   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:24.371312   54471 api_server.go:72] duration metric: took 2.779816745s to wait for apiserver process to appear ...
	I0729 11:35:24.371347   54471 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:35:24.371370   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:24.375969   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 11:35:24.376964   54471 api_server.go:141] control plane version: v1.30.3
	I0729 11:35:24.376983   54471 api_server.go:131] duration metric: took 5.628266ms to wait for apiserver health ...
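
The api_server.go check above GETs the apiserver's /healthz endpoint and expects HTTP 200 with body "ok". A sketch of such a probe; the real check authenticates with the cluster's client certificates, whereas this illustration only skips TLS verification for brevity (an assumption, not minikube's behaviour).

// Package health sketches the /healthz probe logged above.
package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns nil when the endpoint answers 200, e.g.
// "https://192.168.50.53:8443/healthz" returning "ok" as in the log.
func apiserverHealthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
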
	I0729 11:35:24.376993   54471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:35:24.558536   54471 system_pods.go:59] 6 kube-system pods found
	I0729 11:35:24.558566   54471 system_pods.go:61] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running
	I0729 11:35:24.558572   54471 system_pods.go:61] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running
	I0729 11:35:24.558575   54471 system_pods.go:61] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running
	I0729 11:35:24.558579   54471 system_pods.go:61] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running
	I0729 11:35:24.558582   54471 system_pods.go:61] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running
	I0729 11:35:24.558585   54471 system_pods.go:61] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:24.558590   54471 system_pods.go:74] duration metric: took 181.592074ms to wait for pod list to return data ...
	I0729 11:35:24.558599   54471 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:35:24.754913   54471 default_sa.go:45] found service account: "default"
	I0729 11:35:24.754946   54471 default_sa.go:55] duration metric: took 196.341673ms for default service account to be created ...
	I0729 11:35:24.754958   54471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:35:24.960193   54471 system_pods.go:86] 6 kube-system pods found
	I0729 11:35:24.960225   54471 system_pods.go:89] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running
	I0729 11:35:24.960232   54471 system_pods.go:89] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running
	I0729 11:35:24.960245   54471 system_pods.go:89] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running
	I0729 11:35:24.960251   54471 system_pods.go:89] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running
	I0729 11:35:24.960258   54471 system_pods.go:89] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running
	I0729 11:35:24.960263   54471 system_pods.go:89] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:24.960281   54471 system_pods.go:126] duration metric: took 205.307612ms to wait for k8s-apps to be running ...
	I0729 11:35:24.960294   54471 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:35:24.960343   54471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:35:24.980671   54471 system_svc.go:56] duration metric: took 20.366233ms WaitForService to wait for kubelet
	I0729 11:35:24.980705   54471 kubeadm.go:582] duration metric: took 3.389213924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:35:24.980729   54471 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:35:25.156238   54471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:35:25.156267   54471 node_conditions.go:123] node cpu capacity is 2
	I0729 11:35:25.156277   54471 node_conditions.go:105] duration metric: took 175.54223ms to run NodePressure ...
	I0729 11:35:25.156293   54471 start.go:241] waiting for startup goroutines ...
	I0729 11:35:25.156305   54471 start.go:246] waiting for cluster config update ...
	I0729 11:35:25.156315   54471 start.go:255] writing updated cluster config ...
	I0729 11:35:25.156649   54471 ssh_runner.go:195] Run: rm -f paused
	I0729 11:35:25.209606   54471 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:35:25.211704   54471 out.go:177] * Done! kubectl is now configured to use "pause-581851" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 11:35:25 pause-581851 crio[2235]: time="2024-07-29 11:35:25.996099598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252925996079009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1044ee00-1ea2-4e7e-ac8e-88c0732641be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:25 pause-581851 crio[2235]: time="2024-07-29 11:35:25.996820925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab6799f5-d1f4-4c62-aba5-67afe99a2950 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:25 pause-581851 crio[2235]: time="2024-07-29 11:35:25.996905422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab6799f5-d1f4-4c62-aba5-67afe99a2950 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:25 pause-581851 crio[2235]: time="2024-07-29 11:35:25.997234944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab6799f5-d1f4-4c62-aba5-67afe99a2950 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.054442049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3076def-20da-4c5e-b9ec-f2bb8eaa371a name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.054608720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3076def-20da-4c5e-b9ec-f2bb8eaa371a name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.056265875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1d0fcac-f4d5-45ba-9f22-248ba2f8c607 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.056690525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252926056665831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1d0fcac-f4d5-45ba-9f22-248ba2f8c607 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.057416348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab1a44bb-c60a-4ec3-ab1a-5c93d1663ee4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.057467975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab1a44bb-c60a-4ec3-ab1a-5c93d1663ee4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.057790992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab1a44bb-c60a-4ec3-ab1a-5c93d1663ee4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.117958928Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee2e2ab6-af53-4304-b49f-8c24ed6d8f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.118062324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee2e2ab6-af53-4304-b49f-8c24ed6d8f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.119978782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ef9c753-c2b3-437a-aa5a-a742666e4e3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.120498657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252926120464347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ef9c753-c2b3-437a-aa5a-a742666e4e3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.121287798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=035e18ac-ba6b-4941-8f12-efc69bbaf521 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.121364335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=035e18ac-ba6b-4941-8f12-efc69bbaf521 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.121944024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=035e18ac-ba6b-4941-8f12-efc69bbaf521 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.173927213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=415387cf-93d3-4154-a51b-f5db1c5cf481 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.174026471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=415387cf-93d3-4154-a51b-f5db1c5cf481 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.175107879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5689583b-2938-4f1c-a743-45f6d1b801e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.175794758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252926175761212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5689583b-2938-4f1c-a743-45f6d1b801e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.176570650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dce69605-a0a9-420b-9c26-77b2fb664388 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.176644654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dce69605-a0a9-420b-9c26-77b2fb664388 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:26 pause-581851 crio[2235]: time="2024-07-29 11:35:26.176949779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dce69605-a0a9-420b-9c26-77b2fb664388 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c537aeb34b9b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   426cac2b8b210       coredns-7db6d8ff4d-fmbbt
	ce2191b91903c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                2                   eed8fa4247f11       kube-proxy-9c8zc
	37ca187fb6910       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago      Running             kube-controller-manager   2                   55eaea7e34236       kube-controller-manager-pause-581851
	e2f6fc6d1c26b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago      Running             kube-scheduler            2                   19749966fb3ee       kube-scheduler-pause-581851
	fe6ce9985e3e9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   82a89953e42f6       etcd-pause-581851
	06bc2aa0533bf       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago      Running             kube-apiserver            2                   1271f1fee93b3       kube-apiserver-pause-581851
	269528986b69c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   48 seconds ago      Exited              coredns                   1                   426cac2b8b210       coredns-7db6d8ff4d-fmbbt
	fac8d87679859       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   48 seconds ago      Exited              kube-proxy                1                   eed8fa4247f11       kube-proxy-9c8zc
	1758162276bc2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   48 seconds ago      Exited              kube-scheduler            1                   19749966fb3ee       kube-scheduler-pause-581851
	02ab66aeb0736       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   48 seconds ago      Exited              etcd                      1                   82a89953e42f6       etcd-pause-581851
	ee61ceda3fe5c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   49 seconds ago      Exited              kube-controller-manager   1                   55eaea7e34236       kube-controller-manager-pause-581851
	ef298c1e5d0dc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   49 seconds ago      Exited              kube-apiserver            1                   1271f1fee93b3       kube-apiserver-pause-581851
	
	
	==> coredns [269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38174 - 35477 "HINFO IN 1349426607110657866.891358503850990292. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014988721s
	
	
	==> coredns [c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60352 - 54783 "HINFO IN 2687006416712034348.3441999991459677384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014613208s
	
	
	==> describe nodes <==
	Name:               pause-581851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-581851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=pause-581851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_33_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-581851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.53
	  Hostname:    pause-581851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8f7a2737fab44e58754172c3a269877
	  System UUID:                a8f7a273-7fab-44e5-8754-172c3a269877
	  Boot ID:                    3a11576d-2857-48f4-bd18-4770b96a6083
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fmbbt                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-581851                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         102s
	  kube-system                 kube-apiserver-pause-581851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-pause-581851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-9c8zc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-581851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  Starting                 44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeReady                101s                 kubelet          Node pause-581851 status is now: NodeReady
	  Normal  RegisteredNode           90s                  node-controller  Node pause-581851 event: Registered Node pause-581851 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node pause-581851 event: Registered Node pause-581851 in Controller
	
	
	==> dmesg <==
	[  +0.056260] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.085307] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176103] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.149418] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.283712] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.272741] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.059164] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.741518] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.637339] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.964348] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.119461] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.946958] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.102345] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 11:34] systemd-fstab-generator[2152]: Ignoring "noauto" option for root device
	[  +0.089001] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.072333] systemd-fstab-generator[2164]: Ignoring "noauto" option for root device
	[  +0.193055] systemd-fstab-generator[2178]: Ignoring "noauto" option for root device
	[  +0.182990] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +0.312822] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +3.058195] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +3.848860] kauditd_printk_skb: 195 callbacks suppressed
	[Jul29 11:35] systemd-fstab-generator[3239]: Ignoring "noauto" option for root device
	[  +5.938023] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.126687] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.306316] systemd-fstab-generator[3671]: Ignoring "noauto" option for root device
	
	
	==> etcd [02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2] <==
	{"level":"info","ts":"2024-07-29T11:34:38.280005Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"40a79b093a7e4780","initial-advertise-peer-urls":["https://192.168.50.53:2380"],"listen-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:34:39.885806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgPreVoteResp from 40a79b093a7e4780 at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.885978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgVoteResp from 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.886014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.886029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 40a79b093a7e4780 elected leader 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.887835Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"40a79b093a7e4780","local-member-attributes":"{Name:pause-581851 ClientURLs:[https://192.168.50.53:2379]}","request-path":"/0/members/40a79b093a7e4780/attributes","cluster-id":"1150b84c67dfd974","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:34:39.887909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:39.888511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:39.89124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.53:2379"}
	{"level":"info","ts":"2024-07-29T11:34:39.895256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:34:39.915626Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:39.915693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:49.083581Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T11:34:49.08375Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-581851","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	{"level":"warn","ts":"2024-07-29T11:34:49.083825Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.083924Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.096001Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.096059Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T11:34:49.098777Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"40a79b093a7e4780","current-leader-member-id":"40a79b093a7e4780"}
	{"level":"info","ts":"2024-07-29T11:34:49.10265Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-07-29T11:34:49.102778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-07-29T11:34:49.102787Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-581851","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	
	
	==> etcd [fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94] <==
	{"level":"warn","ts":"2024-07-29T11:35:08.455736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:07.552726Z","time spent":"902.905664ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6581,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" mod_revision:448 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" value_size:6510 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" > >"}
	{"level":"warn","ts":"2024-07-29T11:35:08.862793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.609758ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277395478209568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" mod_revision:483 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T11:35:08.863033Z","caller":"traceutil/trace.go:171","msg":"trace[162885434] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:518; }","duration":"337.552997ms","start":"2024-07-29T11:35:08.525468Z","end":"2024-07-29T11:35:08.863021Z","steps":["trace[162885434] 'read index received'  (duration: 149.596872ms)","trace[162885434] 'applied index is now lower than readState.Index'  (duration: 187.955375ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:35:08.86325Z","caller":"traceutil/trace.go:171","msg":"trace[1192231471] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"383.274429ms","start":"2024-07-29T11:35:08.479966Z","end":"2024-07-29T11:35:08.863241Z","steps":["trace[1192231471] 'process raft request'  (duration: 195.162699ms)","trace[1192231471] 'compare'  (duration: 187.540045ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:35:08.863307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.47995Z","time spent":"383.323083ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4545,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" mod_revision:483 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" > >"}
	{"level":"warn","ts":"2024-07-29T11:35:08.863427Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.523047Z","time spent":"340.378001ms","remote":"127.0.0.1:47312","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-07-29T11:35:08.863526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.482053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T11:35:08.86361Z","caller":"traceutil/trace.go:171","msg":"trace[1891426879] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:493; }","duration":"384.573272ms","start":"2024-07-29T11:35:08.479028Z","end":"2024-07-29T11:35:08.863602Z","steps":["trace[1891426879] 'agreement among raft nodes before linearized reading'  (duration: 384.469369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.86363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.479022Z","time spent":"384.602324ms","remote":"127.0.0.1:46978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5448,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2024-07-29T11:35:08.863764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.394286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T11:35:08.86378Z","caller":"traceutil/trace.go:171","msg":"trace[322937798] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:493; }","duration":"334.427289ms","start":"2024-07-29T11:35:08.529347Z","end":"2024-07-29T11:35:08.863775Z","steps":["trace[322937798] 'agreement among raft nodes before linearized reading'  (duration: 334.391477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.863793Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.529339Z","time spent":"334.451722ms","remote":"127.0.0.1:47000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	{"level":"warn","ts":"2024-07-29T11:35:08.863981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.573134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2024-07-29T11:35:08.864017Z","caller":"traceutil/trace.go:171","msg":"trace[1592594679] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:493; }","duration":"336.02974ms","start":"2024-07-29T11:35:08.527982Z","end":"2024-07-29T11:35:08.864011Z","steps":["trace[1592594679] 'agreement among raft nodes before linearized reading'  (duration: 335.977721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.864035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.52797Z","time spent":"336.06041ms","remote":"127.0.0.1:46976","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":443,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-29T11:35:09.165839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.295125ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277395478209574 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-581851.17e6abe539855582\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-581851.17e6abe539855582\" value_size:462 lease:5152277395478209569 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T11:35:09.166397Z","caller":"traceutil/trace.go:171","msg":"trace[1379867667] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"298.162838ms","start":"2024-07-29T11:35:08.868215Z","end":"2024-07-29T11:35:09.166378Z","steps":["trace[1379867667] 'process raft request'  (duration: 161.258982ms)","trace[1379867667] 'compare'  (duration: 136.177654ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:35:09.166734Z","caller":"traceutil/trace.go:171","msg":"trace[497208888] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"296.488462ms","start":"2024-07-29T11:35:08.870237Z","end":"2024-07-29T11:35:09.166725Z","steps":["trace[497208888] 'read index received'  (duration: 159.244913ms)","trace[497208888] 'applied index is now lower than readState.Index'  (duration: 137.242799ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:35:09.166892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.64501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T11:35:09.166934Z","caller":"traceutil/trace.go:171","msg":"trace[860515993] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:494; }","duration":"296.71622ms","start":"2024-07-29T11:35:08.87021Z","end":"2024-07-29T11:35:09.166926Z","steps":["trace[860515993] 'agreement among raft nodes before linearized reading'  (duration: 296.6234ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:09.167073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.678552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-fmbbt\" ","response":"range_response_count:1 size:5121"}
	{"level":"info","ts":"2024-07-29T11:35:09.167111Z","caller":"traceutil/trace.go:171","msg":"trace[46604132] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-fmbbt; range_end:; response_count:1; response_revision:494; }","duration":"296.726529ms","start":"2024-07-29T11:35:08.870379Z","end":"2024-07-29T11:35:09.167105Z","steps":["trace[46604132] 'agreement among raft nodes before linearized reading'  (duration: 296.66571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:09.167228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.806761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-29T11:35:09.167305Z","caller":"traceutil/trace.go:171","msg":"trace[902931312] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:494; }","duration":"296.843058ms","start":"2024-07-29T11:35:08.870415Z","end":"2024-07-29T11:35:09.167258Z","steps":["trace[902931312] 'agreement among raft nodes before linearized reading'  (duration: 296.791413ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:35:09.167862Z","caller":"traceutil/trace.go:171","msg":"trace[1158866614] transaction","detail":"{read_only:false; number_of_response:0; response_revision:494; }","duration":"219.401562ms","start":"2024-07-29T11:35:08.94845Z","end":"2024-07-29T11:35:09.167852Z","steps":["trace[1158866614] 'process raft request'  (duration: 217.909951ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:35:26 up 2 min,  0 users,  load average: 1.12, 0.48, 0.18
	Linux pause-581851 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139] <==
	I0729 11:35:08.460645       1 trace.go:236] Trace[1018194497]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:761c23f2-b932-4a8f-a840-01cc8282b8b9,client:192.168.50.53,api-group:,api-version:v1,name:kube-controller-manager-pause-581851,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581851/status,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (29-Jul-2024 11:35:07.546) (total time: 914ms):
	Trace[1018194497]: ["GuaranteedUpdate etcd3" audit-id:761c23f2-b932-4a8f-a840-01cc8282b8b9,key:/pods/kube-system/kube-controller-manager-pause-581851,type:*core.Pod,resource:pods 913ms (11:35:07.546)
	Trace[1018194497]:  ---"Txn call completed" 904ms (11:35:08.456)]
	Trace[1018194497]: ---"Object stored in database" 905ms (11:35:08.456)
	Trace[1018194497]: [914.113614ms] [914.113614ms] END
	I0729 11:35:08.469148       1 trace.go:236] Trace[1616495856]: "List" accept:application/json, */*,audit-id:9b0ec97f-e991-48fe-b7ce-85b0025ea281,client:192.168.50.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (29-Jul-2024 11:35:07.611) (total time: 854ms):
	Trace[1616495856]: ["List(recursive=true) etcd3" audit-id:9b0ec97f-e991-48fe-b7ce-85b0025ea281,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 858ms (11:35:07.611)]
	Trace[1616495856]: [854.485053ms] [854.485053ms] END
	I0729 11:35:08.526147       1 trace.go:236] Trace[1119242437]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.53,type:*v1.Endpoints,resource:apiServerIPInfo (29-Jul-2024 11:35:07.017) (total time: 1508ms):
	Trace[1119242437]: ---"initial value restored" 518ms (11:35:07.535)
	Trace[1119242437]: ---"Transaction prepared" 924ms (11:35:08.460)
	Trace[1119242437]: ---"Txn call completed" 65ms (11:35:08.526)
	Trace[1119242437]: [1.50897492s] [1.50897492s] END
	I0729 11:35:09.167360       1 trace.go:236] Trace[1733212293]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:353b13c6-a37b-41c2-91b6-e602cedf4ed6,client:192.168.50.53,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 11:35:08.521) (total time: 645ms):
	Trace[1733212293]: ["Create etcd3" audit-id:353b13c6-a37b-41c2-91b6-e602cedf4ed6,key:/events/default/pause-581851.17e6abe539855582,type:*core.Event,resource:events 644ms (11:35:08.522)
	Trace[1733212293]:  ---"TransformToStorage succeeded" 342ms (11:35:08.864)
	Trace[1733212293]:  ---"Txn call succeeded" 302ms (11:35:09.166)]
	Trace[1733212293]: [645.580068ms] [645.580068ms] END
	I0729 11:35:09.278375       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:35:09.320068       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:35:09.420158       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:35:09.490514       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:35:09.503298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:35:20.311288       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:35:20.324398       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5] <==
	W0729 11:34:58.235038       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.352055       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.399489       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.437696       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.453513       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.489848       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.529186       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.570169       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.611006       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.617058       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.629459       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.734667       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.736094       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.769661       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.854949       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.894673       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.918103       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.962733       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.023851       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.060842       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.092807       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.098623       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.165861       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.231841       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.244097       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583] <==
	I0729 11:35:20.316083       1 shared_informer.go:320] Caches are synced for HPA
	I0729 11:35:20.317259       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 11:35:20.327806       1 shared_informer.go:320] Caches are synced for job
	I0729 11:35:20.329612       1 shared_informer.go:320] Caches are synced for taint
	I0729 11:35:20.329895       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 11:35:20.330034       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-581851"
	I0729 11:35:20.330104       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 11:35:20.331725       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 11:35:20.331787       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 11:35:20.334137       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:35:20.340635       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 11:35:20.340846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="148.928µs"
	I0729 11:35:20.344977       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 11:35:20.345426       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:35:20.349730       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 11:35:20.364233       1 shared_informer.go:320] Caches are synced for deployment
	I0729 11:35:20.371802       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 11:35:20.375620       1 shared_informer.go:320] Caches are synced for disruption
	I0729 11:35:20.377962       1 shared_informer.go:320] Caches are synced for GC
	I0729 11:35:20.382618       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 11:35:20.389277       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 11:35:20.394014       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 11:35:20.774181       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:35:20.778620       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:35:20.778662       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc] <==
	I0729 11:34:43.716710       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0729 11:34:43.716946       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0729 11:34:43.717000       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0729 11:34:43.717146       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0729 11:34:43.718954       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0729 11:34:43.719428       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0729 11:34:43.719876       1 shared_informer.go:313] Waiting for caches to sync for job
	I0729 11:34:43.721914       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0729 11:34:43.722020       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0729 11:34:43.722125       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.722612       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0729 11:34:43.722781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0729 11:34:43.722811       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0729 11:34:43.722838       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0729 11:34:43.722844       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0729 11:34:43.722872       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0729 11:34:43.722877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0729 11:34:43.722888       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.722956       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.723019       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.728643       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0729 11:34:43.728676       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0729 11:34:43.728702       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0729 11:34:43.729083       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0729 11:34:43.758863       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3] <==
	I0729 11:35:07.664174       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:35:08.457929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 11:35:08.510931       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:35:08.510976       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:35:08.510992       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:35:08.515340       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:35:08.515678       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:35:08.515757       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:35:08.517035       1 config.go:192] "Starting service config controller"
	I0729 11:35:08.517124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:35:08.517181       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:35:08.517199       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:35:08.517764       1 config.go:319] "Starting node config controller"
	I0729 11:35:08.518964       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:35:08.617237       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:35:08.617356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:35:08.619321       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606] <==
	I0729 11:34:39.603750       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:34:41.721256       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 11:34:41.824649       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:34:41.824732       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:34:41.824754       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:34:41.865067       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:34:41.870421       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:34:41.883666       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:41.908633       1 config.go:192] "Starting service config controller"
	I0729 11:34:41.908685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:34:41.908720       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:34:41.908726       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:34:41.909348       1 config.go:319] "Starting node config controller"
	I0729 11:34:41.909387       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:34:42.009784       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:34:42.010195       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:34:42.010527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8] <==
	I0729 11:34:39.647281       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:34:41.654307       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:34:41.655186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:34:41.655257       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:34:41.655292       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:34:41.707057       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:34:41.707966       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:41.710685       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:34:41.711037       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:34:41.711108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:34:41.711157       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:34:41.811710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:34:48.883266       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 11:34:48.883429       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 11:34:48.883510       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 11:34:48.884085       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb] <==
	I0729 11:35:03.865705       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:35:05.435825       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:35:05.435917       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:35:05.435927       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:35:05.435934       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:35:05.490699       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:35:05.490809       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:35:05.494379       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:35:05.494431       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:35:05.494999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:35:05.495224       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:35:05.595178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.032853    3246 scope.go:117] "RemoveContainer" containerID="ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.037099    3246 scope.go:117] "RemoveContainer" containerID="ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.039147    3246 scope.go:117] "RemoveContainer" containerID="1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.155897    3246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-581851?timeout=10s\": dial tcp 192.168.50.53:8443: connect: connection refused" interval="800ms"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.253391    3246 kubelet_node_status.go:73] "Attempting to register node" node="pause-581851"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.254986    3246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.53:8443: connect: connection refused" node="pause-581851"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: W0729 11:35:02.537885    3246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-581851&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.538109    3246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-581851&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: W0729 11:35:02.544315    3246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.544438    3246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:03 pause-581851 kubelet[3246]: I0729 11:35:03.057344    3246 kubelet_node_status.go:73] "Attempting to register node" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.628343    3246 kubelet_node_status.go:112] "Node was previously registered" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.628808    3246 kubelet_node_status.go:76] "Successfully registered node" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.630307    3246 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.631402    3246 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: E0729 11:35:05.784602    3246 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-581851\" already exists" pod="kube-system/kube-apiserver-pause-581851"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.524167    3246 apiserver.go:52] "Watching apiserver"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.527200    3246 topology_manager.go:215] "Topology Admit Handler" podUID="ee1d727f-0f74-4d9d-b25c-f2a885c5d965" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fmbbt"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.528779    3246 topology_manager.go:215] "Topology Admit Handler" podUID="a92ad38f-257a-4364-a81c-2b6bfcb3150c" podNamespace="kube-system" podName="kube-proxy-9c8zc"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.543283    3246 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.641050    3246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a92ad38f-257a-4364-a81c-2b6bfcb3150c-lib-modules\") pod \"kube-proxy-9c8zc\" (UID: \"a92ad38f-257a-4364-a81c-2b6bfcb3150c\") " pod="kube-system/kube-proxy-9c8zc"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.641640    3246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a92ad38f-257a-4364-a81c-2b6bfcb3150c-xtables-lock\") pod \"kube-proxy-9c8zc\" (UID: \"a92ad38f-257a-4364-a81c-2b6bfcb3150c\") " pod="kube-system/kube-proxy-9c8zc"
	Jul 29 11:35:07 pause-581851 kubelet[3246]: I0729 11:35:07.130842    3246 scope.go:117] "RemoveContainer" containerID="269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6"
	Jul 29 11:35:07 pause-581851 kubelet[3246]: I0729 11:35:07.131273    3246 scope.go:117] "RemoveContainer" containerID="fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606"
	Jul 29 11:35:11 pause-581851 kubelet[3246]: I0729 11:35:11.564089    3246 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-581851 -n pause-581851
helpers_test.go:261: (dbg) Run:  kubectl --context pause-581851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-581851 -n pause-581851
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-581851 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-581851 logs -n 25: (1.801272157s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:31 UTC | 29 Jul 24 11:32 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:33 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-342576           | running-upgrade-342576    | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:32 UTC |
	| start   | -p pause-581851 --memory=2048       | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:32 UTC | 29 Jul 24 11:34 UTC |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h             |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-941459 sudo         | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-941459              | NoKubernetes-941459       | jenkins | v1.33.1 | 29 Jul 24 11:33 UTC | 29 Jul 24 11:33 UTC |
	| start   | -p force-systemd-env-802488         | force-systemd-env-802488  | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p pause-581851                     | pause-581851              | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:35 UTC |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-338366           | cert-expiration-338366    | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p auto-184479 --memory=3072        | auto-184479               | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302301        | kubernetes-upgrade-302301 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p kindnet-184479                   | kindnet-184479            | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --memory=3072                       |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-802488         | force-systemd-env-802488  | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	| start   | -p calico-184479 --memory=3072      | calico-184479             | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:34:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:34:48.756383   55365 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:34:48.756477   55365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:48.756485   55365 out.go:304] Setting ErrFile to fd 2...
	I0729 11:34:48.756489   55365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:48.756681   55365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:34:48.757255   55365 out.go:298] Setting JSON to false
	I0729 11:34:48.758150   55365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4635,"bootTime":1722248254,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:34:48.758204   55365 start.go:139] virtualization: kvm guest
	I0729 11:34:48.760289   55365 out.go:177] * [calico-184479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:34:48.762411   55365 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:34:48.762434   55365 notify.go:220] Checking for updates...
	I0729 11:34:48.765252   55365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:34:48.766758   55365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:34:48.768275   55365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:48.769746   55365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:34:48.771310   55365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:34:48.773262   55365 config.go:182] Loaded profile config "auto-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773447   55365 config.go:182] Loaded profile config "kindnet-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773633   55365 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:48.773790   55365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:34:48.811123   55365 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:34:48.812711   55365 start.go:297] selected driver: kvm2
	I0729 11:34:48.812727   55365 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:34:48.812737   55365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:34:48.813474   55365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:48.813554   55365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:34:48.828871   55365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:34:48.828923   55365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:34:48.829143   55365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:34:48.829165   55365 cni.go:84] Creating CNI manager for "calico"
	I0729 11:34:48.829173   55365 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 11:34:48.829219   55365 start.go:340] cluster config:
	{Name:calico-184479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:34:48.829342   55365 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:34:48.831279   55365 out.go:177] * Starting "calico-184479" primary control-plane node in "calico-184479" cluster
	I0729 11:34:49.953786   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:49.954396   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find current IP address of domain auto-184479 in network mk-auto-184479
	I0729 11:34:49.954418   54676 main.go:141] libmachine: (auto-184479) DBG | I0729 11:34:49.954340   55076 retry.go:31] will retry after 4.394100822s: waiting for machine to come up
	I0729 11:34:48.832759   55365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:34:48.832793   55365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:34:48.832803   55365 cache.go:56] Caching tarball of preloaded images
	I0729 11:34:48.832892   55365 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:34:48.832902   55365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:34:48.832985   55365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/config.json ...
	I0729 11:34:48.833002   55365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/config.json: {Name:mkc9ec4c4f3eac623af233407469304c6b526181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:34:48.833125   55365 start.go:360] acquireMachinesLock for calico-184479: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:34:54.350530   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:54.351084   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find current IP address of domain auto-184479 in network mk-auto-184479
	I0729 11:34:54.351108   54676 main.go:141] libmachine: (auto-184479) DBG | I0729 11:34:54.351019   55076 retry.go:31] will retry after 3.735950942s: waiting for machine to come up
	I0729 11:34:58.088969   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.089438   54676 main.go:141] libmachine: (auto-184479) Found IP for machine: 192.168.39.78
	I0729 11:34:58.089482   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has current primary IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.089495   54676 main.go:141] libmachine: (auto-184479) Reserving static IP address...
	I0729 11:34:58.089890   54676 main.go:141] libmachine: (auto-184479) DBG | unable to find host DHCP lease matching {name: "auto-184479", mac: "52:54:00:17:a6:79", ip: "192.168.39.78"} in network mk-auto-184479
	I0729 11:34:58.163962   54676 main.go:141] libmachine: (auto-184479) DBG | Getting to WaitForSSH function...
	I0729 11:34:58.164010   54676 main.go:141] libmachine: (auto-184479) Reserved static IP address: 192.168.39.78
	I0729 11:34:58.164023   54676 main.go:141] libmachine: (auto-184479) Waiting for SSH to be available...
	I0729 11:34:58.166969   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.167440   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.167474   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.167637   54676 main.go:141] libmachine: (auto-184479) DBG | Using SSH client type: external
	I0729 11:34:58.167666   54676 main.go:141] libmachine: (auto-184479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa (-rw-------)
	I0729 11:34:58.167714   54676 main.go:141] libmachine: (auto-184479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:34:58.167729   54676 main.go:141] libmachine: (auto-184479) DBG | About to run SSH command:
	I0729 11:34:58.167762   54676 main.go:141] libmachine: (auto-184479) DBG | exit 0
	I0729 11:34:58.290805   54676 main.go:141] libmachine: (auto-184479) DBG | SSH cmd err, output: <nil>: 
	I0729 11:34:58.291065   54676 main.go:141] libmachine: (auto-184479) KVM machine creation complete!
	I0729 11:34:58.291404   54676 main.go:141] libmachine: (auto-184479) Calling .GetConfigRaw
	I0729 11:34:58.291952   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:58.292108   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:58.292293   54676 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:34:58.292309   54676 main.go:141] libmachine: (auto-184479) Calling .GetState
	I0729 11:34:58.293536   54676 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:34:58.293551   54676 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:34:58.293558   54676 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:34:58.293566   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.295984   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.296355   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.296377   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.296512   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.296657   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.296819   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.296993   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.297163   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.297356   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.297366   54676 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:34:58.402148   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:34:58.402179   54676 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:34:58.402192   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.404888   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.405292   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.405318   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.405438   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.405623   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.405793   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.405911   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.406086   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.406247   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.406258   54676 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:34:58.511418   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:34:58.511528   54676 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:34:58.511545   54676 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:34:58.511556   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.511796   54676 buildroot.go:166] provisioning hostname "auto-184479"
	I0729 11:34:58.511820   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.511995   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.514732   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.515221   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.515245   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.515478   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.515654   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.515810   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.515949   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.516311   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.516533   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.516548   54676 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-184479 && echo "auto-184479" | sudo tee /etc/hostname
	I0729 11:34:58.637990   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-184479
	
	I0729 11:34:58.638024   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.641106   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.641454   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.641485   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.641728   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.642011   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.642207   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.642368   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.642554   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:58.642779   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:58.642803   54676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-184479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-184479/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-184479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:34:59.588035   55120 start.go:364] duration metric: took 22.904192673s to acquireMachinesLock for "kindnet-184479"
	I0729 11:34:59.588096   55120 start.go:93] Provisioning new machine with config: &{Name:kindnet-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:34:59.588277   55120 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:34:58.756024   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:34:58.756051   54676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:34:58.756071   54676 buildroot.go:174] setting up certificates
	I0729 11:34:58.756083   54676 provision.go:84] configureAuth start
	I0729 11:34:58.756095   54676 main.go:141] libmachine: (auto-184479) Calling .GetMachineName
	I0729 11:34:58.756402   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:58.759029   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.759335   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.759362   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.759504   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.761677   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.762057   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.762087   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.762236   54676 provision.go:143] copyHostCerts
	I0729 11:34:58.762290   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:34:58.762303   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:34:58.762387   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:34:58.762505   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:34:58.762517   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:34:58.762548   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:34:58.762638   54676 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:34:58.762648   54676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:34:58.762674   54676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:34:58.762780   54676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.auto-184479 san=[127.0.0.1 192.168.39.78 auto-184479 localhost minikube]
	I0729 11:34:58.902280   54676 provision.go:177] copyRemoteCerts
	I0729 11:34:58.902335   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:34:58.902361   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:58.904858   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.905196   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:58.905237   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:58.905383   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:58.905571   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:58.905724   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:58.905853   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:58.989419   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:34:59.015031   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:34:59.043074   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 11:34:59.070539   54676 provision.go:87] duration metric: took 314.443173ms to configureAuth
	I0729 11:34:59.070571   54676 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:34:59.070754   54676 config.go:182] Loaded profile config "auto-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:59.070818   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.073440   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.073766   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.073795   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.073949   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.074156   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.074307   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.074437   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.074610   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:59.074813   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:59.074834   54676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:34:59.342345   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:34:59.342377   54676 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:34:59.342388   54676 main.go:141] libmachine: (auto-184479) Calling .GetURL
	I0729 11:34:59.343710   54676 main.go:141] libmachine: (auto-184479) DBG | Using libvirt version 6000000
	I0729 11:34:59.345744   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.346102   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.346137   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.346286   54676 main.go:141] libmachine: Docker is up and running!
	I0729 11:34:59.346304   54676 main.go:141] libmachine: Reticulating splines...
	I0729 11:34:59.346311   54676 client.go:171] duration metric: took 25.286758221s to LocalClient.Create
	I0729 11:34:59.346333   54676 start.go:167] duration metric: took 25.286818391s to libmachine.API.Create "auto-184479"
	I0729 11:34:59.346341   54676 start.go:293] postStartSetup for "auto-184479" (driver="kvm2")
	I0729 11:34:59.346350   54676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:34:59.346371   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.346588   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:34:59.346611   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.348558   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.348839   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.348871   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.348981   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.349149   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.349297   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.349451   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.433874   54676 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:34:59.438615   54676 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:34:59.438639   54676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:34:59.438687   54676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:34:59.438789   54676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:34:59.438874   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:34:59.448605   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:34:59.473403   54676 start.go:296] duration metric: took 127.048945ms for postStartSetup
	I0729 11:34:59.473462   54676 main.go:141] libmachine: (auto-184479) Calling .GetConfigRaw
	I0729 11:34:59.474054   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:59.476744   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.477088   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.477105   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.477396   54676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/config.json ...
	I0729 11:34:59.477567   54676 start.go:128] duration metric: took 25.505299333s to createHost
	I0729 11:34:59.477593   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.479814   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.480127   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.480159   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.480368   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.480547   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.480740   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.480881   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.481080   54676 main.go:141] libmachine: Using SSH client type: native
	I0729 11:34:59.481347   54676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0729 11:34:59.481369   54676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:34:59.587862   54676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252899.560353550
	
	I0729 11:34:59.587886   54676 fix.go:216] guest clock: 1722252899.560353550
	I0729 11:34:59.587905   54676 fix.go:229] Guest: 2024-07-29 11:34:59.56035355 +0000 UTC Remote: 2024-07-29 11:34:59.477580142 +0000 UTC m=+50.787512074 (delta=82.773408ms)
	I0729 11:34:59.587931   54676 fix.go:200] guest clock delta is within tolerance: 82.773408ms
	I0729 11:34:59.587941   54676 start.go:83] releasing machines lock for "auto-184479", held for 25.615879857s
	I0729 11:34:59.587974   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.588270   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:34:59.591070   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.591491   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.591522   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.591708   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592333   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592551   54676 main.go:141] libmachine: (auto-184479) Calling .DriverName
	I0729 11:34:59.592650   54676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:34:59.592689   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.592776   54676 ssh_runner.go:195] Run: cat /version.json
	I0729 11:34:59.592817   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHHostname
	I0729 11:34:59.595683   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.595903   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.595995   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.596019   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.596204   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.596312   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:34:59.596333   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:34:59.596375   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.596574   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.596578   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHPort
	I0729 11:34:59.596747   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHKeyPath
	I0729 11:34:59.596754   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.596872   54676 main.go:141] libmachine: (auto-184479) Calling .GetSSHUsername
	I0729 11:34:59.597004   54676 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/auto-184479/id_rsa Username:docker}
	I0729 11:34:59.701108   54676 ssh_runner.go:195] Run: systemctl --version
	I0729 11:34:59.708963   54676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:34:59.890412   54676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:34:59.897427   54676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:34:59.897502   54676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:34:59.914296   54676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:34:59.914324   54676 start.go:495] detecting cgroup driver to use...
	I0729 11:34:59.914398   54676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:34:59.936357   54676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:34:59.954461   54676 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:34:59.954540   54676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:34:59.969469   54676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:34:59.983091   54676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:35:00.110819   54676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:35:00.267438   54676 docker.go:233] disabling docker service ...
	I0729 11:35:00.267491   54676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:35:00.289826   54676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:35:00.307310   54676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:35:00.453877   54676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:35:00.583813   54676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:35:00.599378   54676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:35:00.620527   54676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:35:00.620589   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.631493   54676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:35:00.631556   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.642631   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.653733   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.664969   54676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:35:00.676195   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.687276   54676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.713604   54676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:00.728556   54676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:35:00.740911   54676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:35:00.741055   54676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:35:00.756249   54676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:35:00.769227   54676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:00.919185   54676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:35:01.076833   54676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:35:01.076907   54676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:35:01.082461   54676 start.go:563] Will wait 60s for crictl version
	I0729 11:35:01.082556   54676 ssh_runner.go:195] Run: which crictl
	I0729 11:35:01.086591   54676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:35:01.134877   54676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:35:01.134962   54676 ssh_runner.go:195] Run: crio --version
	I0729 11:35:01.177552   54676 ssh_runner.go:195] Run: crio --version
	I0729 11:35:01.217832   54676 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:34:59.437826   54471 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6 fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606 1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8 02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2 ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c 2a9e43dbf0cad314eabc7066e0ddfc23d008b1295bf7de2d3890ae76ebdc8779 44654efa8137ba72b4408b334440f2e7a6c01b81b19b238b73ef5b8d719cfbae 9199fe3dfa0bd106bf90ed32397cbede811cb6ecb474b10097c37ee3ce50d79f 5306108860ba1c1d3ea60cfe162e8ce223f2689df224919390b8928367e555f0 f20459fac0b7ac8d9ea13916e2a3e05c8ddf2c2d2d2f5cb4885a140684287cf4: (20.040569015s)
	W0729 11:34:59.437902   54471 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6 fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606 1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8 02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2 ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c 2a9e43dbf0cad314eabc7066e0ddfc23d008b1295bf7de2d3890ae76ebdc8779 44654efa8137ba72b4408b334440f2e7a6c01b81b19b238b73ef5b8d719cfbae 9199fe3dfa0bd106bf90ed32397cbede811cb6ecb474b10097c37ee3ce50d79f 5306108860ba1c1d3ea60cfe162e8ce223f2689df224919390b8928367e555f0 f20459fac0b7ac8d9ea13916e2a3e05c8ddf2c2d2d2f5cb4885a140684287cf4: Process exited with status 1
	stdout:
	269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6
	fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606
	1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8
	02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2
	ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc
	ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5
	
	stderr:
	E0729 11:34:59.431503    2935 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": container with ID starting with 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c not found: ID does not exist" containerID="46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c"
	time="2024-07-29T11:34:59Z" level=fatal msg="stopping the container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": rpc error: code = NotFound desc = could not find container \"46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c\": container with ID starting with 46f5fbe03db2a8af06941692dbfa97c142670b78b5f8ad323dc67ce8d8905b5c not found: ID does not exist"
	I0729 11:34:59.437999   54471 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:34:59.484848   54471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:34:59.497340   54471 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 11:33 /etc/kubernetes/scheduler.conf
	
	I0729 11:34:59.497397   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:34:59.508800   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:34:59.520154   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:34:59.529887   54471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:34:59.529943   54471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:34:59.539950   54471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:34:59.551107   54471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:34:59.551173   54471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:34:59.561108   54471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:34:59.571450   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:34:59.646222   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.130026   54471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.483765391s)
	I0729 11:35:01.130098   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.396166   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.481956   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:01.719923   54471 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:35:01.720021   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:02.220840   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:01.219598   54676 main.go:141] libmachine: (auto-184479) Calling .GetIP
	I0729 11:35:01.222738   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:35:01.223189   54676 main.go:141] libmachine: (auto-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:a6:79", ip: ""} in network mk-auto-184479: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:50 +0000 UTC Type:0 Mac:52:54:00:17:a6:79 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:auto-184479 Clientid:01:52:54:00:17:a6:79}
	I0729 11:35:01.223250   54676 main.go:141] libmachine: (auto-184479) DBG | domain auto-184479 has defined IP address 192.168.39.78 and MAC address 52:54:00:17:a6:79 in network mk-auto-184479
	I0729 11:35:01.223473   54676 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:35:01.231679   54676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:35:01.252941   54676 kubeadm.go:883] updating cluster {Name:auto-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:35:01.253118   54676 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:35:01.253689   54676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:35:01.292431   54676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:35:01.292518   54676 ssh_runner.go:195] Run: which lz4
	I0729 11:35:01.298863   54676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:35:01.305924   54676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:35:01.305963   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:35:02.964847   54676 crio.go:462] duration metric: took 1.66603819s to copy over tarball
	I0729 11:35:02.964948   54676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:34:59.590468   55120 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 11:34:59.590661   55120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:59.590736   55120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:59.610471   55120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0729 11:34:59.610996   55120 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:59.611635   55120 main.go:141] libmachine: Using API Version  1
	I0729 11:34:59.611680   55120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:59.612045   55120 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:59.612230   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:34:59.612421   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:34:59.612613   55120 start.go:159] libmachine.API.Create for "kindnet-184479" (driver="kvm2")
	I0729 11:34:59.612644   55120 client.go:168] LocalClient.Create starting
	I0729 11:34:59.612702   55120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 11:34:59.612744   55120 main.go:141] libmachine: Decoding PEM data...
	I0729 11:34:59.612767   55120 main.go:141] libmachine: Parsing certificate...
	I0729 11:34:59.612845   55120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 11:34:59.612873   55120 main.go:141] libmachine: Decoding PEM data...
	I0729 11:34:59.612900   55120 main.go:141] libmachine: Parsing certificate...
	I0729 11:34:59.612926   55120 main.go:141] libmachine: Running pre-create checks...
	I0729 11:34:59.612942   55120 main.go:141] libmachine: (kindnet-184479) Calling .PreCreateCheck
	I0729 11:34:59.613364   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetConfigRaw
	I0729 11:34:59.613806   55120 main.go:141] libmachine: Creating machine...
	I0729 11:34:59.613832   55120 main.go:141] libmachine: (kindnet-184479) Calling .Create
	I0729 11:34:59.613983   55120 main.go:141] libmachine: (kindnet-184479) Creating KVM machine...
	I0729 11:34:59.615552   55120 main.go:141] libmachine: (kindnet-184479) DBG | found existing default KVM network
	I0729 11:34:59.616745   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.616551   55451 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a8:3b:42} reservation:<nil>}
	I0729 11:34:59.617432   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.617334   55451 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0d:5d:21} reservation:<nil>}
	I0729 11:34:59.618621   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.618509   55451 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a000}
	I0729 11:34:59.618722   55120 main.go:141] libmachine: (kindnet-184479) DBG | created network xml: 
	I0729 11:34:59.618744   55120 main.go:141] libmachine: (kindnet-184479) DBG | <network>
	I0729 11:34:59.618755   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <name>mk-kindnet-184479</name>
	I0729 11:34:59.618766   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <dns enable='no'/>
	I0729 11:34:59.618775   55120 main.go:141] libmachine: (kindnet-184479) DBG |   
	I0729 11:34:59.618784   55120 main.go:141] libmachine: (kindnet-184479) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0729 11:34:59.618805   55120 main.go:141] libmachine: (kindnet-184479) DBG |     <dhcp>
	I0729 11:34:59.618820   55120 main.go:141] libmachine: (kindnet-184479) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0729 11:34:59.618830   55120 main.go:141] libmachine: (kindnet-184479) DBG |     </dhcp>
	I0729 11:34:59.618836   55120 main.go:141] libmachine: (kindnet-184479) DBG |   </ip>
	I0729 11:34:59.618844   55120 main.go:141] libmachine: (kindnet-184479) DBG |   
	I0729 11:34:59.618850   55120 main.go:141] libmachine: (kindnet-184479) DBG | </network>
	I0729 11:34:59.618861   55120 main.go:141] libmachine: (kindnet-184479) DBG | 
	I0729 11:34:59.624665   55120 main.go:141] libmachine: (kindnet-184479) DBG | trying to create private KVM network mk-kindnet-184479 192.168.61.0/24...
	I0729 11:34:59.696581   55120 main.go:141] libmachine: (kindnet-184479) DBG | private KVM network mk-kindnet-184479 192.168.61.0/24 created
	I0729 11:34:59.696614   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.696538   55451 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:34:59.696627   55120 main.go:141] libmachine: (kindnet-184479) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 ...
	I0729 11:34:59.696659   55120 main.go:141] libmachine: (kindnet-184479) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:34:59.696776   55120 main.go:141] libmachine: (kindnet-184479) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:34:59.958470   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:34:59.958348   55451 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa...
	I0729 11:35:00.071095   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:00.070945   55451 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/kindnet-184479.rawdisk...
	I0729 11:35:00.071150   55120 main.go:141] libmachine: (kindnet-184479) DBG | Writing magic tar header
	I0729 11:35:00.071167   55120 main.go:141] libmachine: (kindnet-184479) DBG | Writing SSH key tar header
	I0729 11:35:00.071181   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:00.071100   55451 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 ...
	I0729 11:35:00.071245   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479
	I0729 11:35:00.071332   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479 (perms=drwx------)
	I0729 11:35:00.071353   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 11:35:00.071365   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:35:00.071376   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:35:00.071390   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 11:35:00.071408   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 11:35:00.071422   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:35:00.071432   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 11:35:00.071503   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:35:00.071536   55120 main.go:141] libmachine: (kindnet-184479) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:35:00.071548   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:35:00.071562   55120 main.go:141] libmachine: (kindnet-184479) DBG | Checking permissions on dir: /home
	I0729 11:35:00.071572   55120 main.go:141] libmachine: (kindnet-184479) DBG | Skipping /home - not owner
	I0729 11:35:00.071592   55120 main.go:141] libmachine: (kindnet-184479) Creating domain...
	I0729 11:35:00.072600   55120 main.go:141] libmachine: (kindnet-184479) define libvirt domain using xml: 
	I0729 11:35:00.072621   55120 main.go:141] libmachine: (kindnet-184479) <domain type='kvm'>
	I0729 11:35:00.072632   55120 main.go:141] libmachine: (kindnet-184479)   <name>kindnet-184479</name>
	I0729 11:35:00.072640   55120 main.go:141] libmachine: (kindnet-184479)   <memory unit='MiB'>3072</memory>
	I0729 11:35:00.072649   55120 main.go:141] libmachine: (kindnet-184479)   <vcpu>2</vcpu>
	I0729 11:35:00.072666   55120 main.go:141] libmachine: (kindnet-184479)   <features>
	I0729 11:35:00.072676   55120 main.go:141] libmachine: (kindnet-184479)     <acpi/>
	I0729 11:35:00.072692   55120 main.go:141] libmachine: (kindnet-184479)     <apic/>
	I0729 11:35:00.072712   55120 main.go:141] libmachine: (kindnet-184479)     <pae/>
	I0729 11:35:00.072736   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.072748   55120 main.go:141] libmachine: (kindnet-184479)   </features>
	I0729 11:35:00.072756   55120 main.go:141] libmachine: (kindnet-184479)   <cpu mode='host-passthrough'>
	I0729 11:35:00.072765   55120 main.go:141] libmachine: (kindnet-184479)   
	I0729 11:35:00.072773   55120 main.go:141] libmachine: (kindnet-184479)   </cpu>
	I0729 11:35:00.072784   55120 main.go:141] libmachine: (kindnet-184479)   <os>
	I0729 11:35:00.072794   55120 main.go:141] libmachine: (kindnet-184479)     <type>hvm</type>
	I0729 11:35:00.072810   55120 main.go:141] libmachine: (kindnet-184479)     <boot dev='cdrom'/>
	I0729 11:35:00.072826   55120 main.go:141] libmachine: (kindnet-184479)     <boot dev='hd'/>
	I0729 11:35:00.072839   55120 main.go:141] libmachine: (kindnet-184479)     <bootmenu enable='no'/>
	I0729 11:35:00.072847   55120 main.go:141] libmachine: (kindnet-184479)   </os>
	I0729 11:35:00.072863   55120 main.go:141] libmachine: (kindnet-184479)   <devices>
	I0729 11:35:00.072880   55120 main.go:141] libmachine: (kindnet-184479)     <disk type='file' device='cdrom'>
	I0729 11:35:00.072906   55120 main.go:141] libmachine: (kindnet-184479)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/boot2docker.iso'/>
	I0729 11:35:00.072919   55120 main.go:141] libmachine: (kindnet-184479)       <target dev='hdc' bus='scsi'/>
	I0729 11:35:00.072929   55120 main.go:141] libmachine: (kindnet-184479)       <readonly/>
	I0729 11:35:00.072940   55120 main.go:141] libmachine: (kindnet-184479)     </disk>
	I0729 11:35:00.072953   55120 main.go:141] libmachine: (kindnet-184479)     <disk type='file' device='disk'>
	I0729 11:35:00.072962   55120 main.go:141] libmachine: (kindnet-184479)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:35:00.072978   55120 main.go:141] libmachine: (kindnet-184479)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/kindnet-184479.rawdisk'/>
	I0729 11:35:00.072988   55120 main.go:141] libmachine: (kindnet-184479)       <target dev='hda' bus='virtio'/>
	I0729 11:35:00.073000   55120 main.go:141] libmachine: (kindnet-184479)     </disk>
	I0729 11:35:00.073008   55120 main.go:141] libmachine: (kindnet-184479)     <interface type='network'>
	I0729 11:35:00.073017   55120 main.go:141] libmachine: (kindnet-184479)       <source network='mk-kindnet-184479'/>
	I0729 11:35:00.073031   55120 main.go:141] libmachine: (kindnet-184479)       <model type='virtio'/>
	I0729 11:35:00.073044   55120 main.go:141] libmachine: (kindnet-184479)     </interface>
	I0729 11:35:00.073054   55120 main.go:141] libmachine: (kindnet-184479)     <interface type='network'>
	I0729 11:35:00.073063   55120 main.go:141] libmachine: (kindnet-184479)       <source network='default'/>
	I0729 11:35:00.073073   55120 main.go:141] libmachine: (kindnet-184479)       <model type='virtio'/>
	I0729 11:35:00.073084   55120 main.go:141] libmachine: (kindnet-184479)     </interface>
	I0729 11:35:00.073093   55120 main.go:141] libmachine: (kindnet-184479)     <serial type='pty'>
	I0729 11:35:00.073111   55120 main.go:141] libmachine: (kindnet-184479)       <target port='0'/>
	I0729 11:35:00.073127   55120 main.go:141] libmachine: (kindnet-184479)     </serial>
	I0729 11:35:00.073139   55120 main.go:141] libmachine: (kindnet-184479)     <console type='pty'>
	I0729 11:35:00.073149   55120 main.go:141] libmachine: (kindnet-184479)       <target type='serial' port='0'/>
	I0729 11:35:00.073160   55120 main.go:141] libmachine: (kindnet-184479)     </console>
	I0729 11:35:00.073168   55120 main.go:141] libmachine: (kindnet-184479)     <rng model='virtio'>
	I0729 11:35:00.073179   55120 main.go:141] libmachine: (kindnet-184479)       <backend model='random'>/dev/random</backend>
	I0729 11:35:00.073186   55120 main.go:141] libmachine: (kindnet-184479)     </rng>
	I0729 11:35:00.073212   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.073234   55120 main.go:141] libmachine: (kindnet-184479)     
	I0729 11:35:00.073246   55120 main.go:141] libmachine: (kindnet-184479)   </devices>
	I0729 11:35:00.073256   55120 main.go:141] libmachine: (kindnet-184479) </domain>
	I0729 11:35:00.073266   55120 main.go:141] libmachine: (kindnet-184479) 
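The `<domain>` XML printed above is what the kvm2 driver hands to libvirt before the "Creating domain..." step. A hedged sketch of defining and booting such a domain with the libvirt Go bindings; the `libvirt.org/go/libvirt` package, the file name, and the overall flow here are assumptions for illustration, not the driver's actual code:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// Connect to the same system URI shown in the cluster config (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	// Persistently define the domain from the XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	xml, err := os.ReadFile("kindnet-184479.xml") // hypothetical file holding the XML above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := defineAndStart(string(xml)); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}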
	I0729 11:35:00.077575   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:10:d4:f3 in network default
	I0729 11:35:00.078121   55120 main.go:141] libmachine: (kindnet-184479) Ensuring networks are active...
	I0729 11:35:00.078137   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:00.078872   55120 main.go:141] libmachine: (kindnet-184479) Ensuring network default is active
	I0729 11:35:00.079155   55120 main.go:141] libmachine: (kindnet-184479) Ensuring network mk-kindnet-184479 is active
	I0729 11:35:00.079626   55120 main.go:141] libmachine: (kindnet-184479) Getting domain xml...
	I0729 11:35:00.080270   55120 main.go:141] libmachine: (kindnet-184479) Creating domain...
	I0729 11:35:01.381264   55120 main.go:141] libmachine: (kindnet-184479) Waiting to get IP...
	I0729 11:35:01.382042   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.382457   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.382514   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.382451   55451 retry.go:31] will retry after 251.500731ms: waiting for machine to come up
	I0729 11:35:01.635981   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.637037   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.637065   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.636950   55451 retry.go:31] will retry after 285.308466ms: waiting for machine to come up
	I0729 11:35:01.923724   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:01.924364   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:01.924392   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:01.924315   55451 retry.go:31] will retry after 336.487987ms: waiting for machine to come up
	I0729 11:35:02.262967   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:02.263621   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:02.263657   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:02.263520   55451 retry.go:31] will retry after 546.810498ms: waiting for machine to come up
	I0729 11:35:02.812347   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:02.812810   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:02.812836   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:02.812773   55451 retry.go:31] will retry after 556.820256ms: waiting for machine to come up
	I0729 11:35:03.371563   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:03.372067   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:03.372097   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:03.371997   55451 retry.go:31] will retry after 839.666439ms: waiting for machine to come up
	I0729 11:35:04.213162   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:04.213667   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:04.213700   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:04.213609   55451 retry.go:31] will retry after 831.209735ms: waiting for machine to come up
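The repeated "unable to find current IP address ... will retry after Xms" lines come from a retry helper that keeps polling the DHCP leases with growing, jittered delays until the VM reports an address. A hedged sketch of that pattern (illustrative only, not the actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or the deadline passes, sleeping a
// jittered, growing delay between attempts, similar to the
// "will retry after 251.5ms / 285ms / 336ms ..." cadence in the log above.
func retry(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}

func main() {
	// Hypothetical stand-in for "look up the VM's IP in the DHCP leases".
	calls := 0
	err := retry(2*time.Minute, func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}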
	I0729 11:35:02.720248   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:02.772992   54471 api_server.go:72] duration metric: took 1.053069332s to wait for apiserver process to appear ...
	I0729 11:35:02.773020   54471 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:35:02.773047   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.423503   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:35:05.423539   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:35:05.423555   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.434759   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:35:05.434793   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:35:05.773162   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:05.779279   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:05.779310   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:06.273951   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:06.279939   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:06.279962   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:06.773198   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:06.779745   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:35:06.779768   54471 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:35:07.273925   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:07.279513   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 11:35:07.286405   54471 api_server.go:141] control plane version: v1.30.3
	I0729 11:35:07.286440   54471 api_server.go:131] duration metric: took 4.513412734s to wait for apiserver health ...
	I0729 11:35:07.286452   54471 cni.go:84] Creating CNI manager for ""
	I0729 11:35:07.286462   54471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:07.471446   54471 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
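The 403 → 500 → 200 progression above is the apiserver coming up: anonymous requests are rejected until the RBAC bootstrap roles exist, then the remaining post-start hooks clear one by one until /healthz returns "ok". A hedged sketch of a healthz poller that treats anything other than a 200 "ok" body as "still starting" (illustrative only, not minikube's checker):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it returns 200 "ok" or the
// timeout expires. 403 and 500 responses, like those in the log above, are
// treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bring-up; a production
		// checker would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.53:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}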
	I0729 11:35:05.735820   54676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770846279s)
	I0729 11:35:05.735849   54676 crio.go:469] duration metric: took 2.770952258s to extract the tarball
	I0729 11:35:05.735859   54676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:35:05.782200   54676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:35:05.837344   54676 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:35:05.837364   54676 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:35:05.837373   54676 kubeadm.go:934] updating node { 192.168.39.78 8443 v1.30.3 crio true true} ...
	I0729 11:35:05.837482   54676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-184479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:35:05.837570   54676 ssh_runner.go:195] Run: crio config
	I0729 11:35:05.882600   54676 cni.go:84] Creating CNI manager for ""
	I0729 11:35:05.882622   54676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:05.882633   54676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:35:05.882655   54676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-184479 NodeName:auto-184479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:35:05.882855   54676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-184479"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
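The kubeadm.yaml above is rendered from the options logged at kubeadm.go:181 before being copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal hedged sketch of rendering such a file with text/template; the struct and the heavily trimmed template below are made up for illustration and are not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// params is a hypothetical, trimmed-down stand-in for the kubeadm options
// struct logged above (AdvertiseAddress, APIServerPort, NodeName, ...).
type params struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above.
	p := params{
		AdvertiseAddress:  "192.168.39.78",
		APIServerPort:     8443,
		NodeName:          "auto-184479",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.30.3",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}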
	I0729 11:35:05.882936   54676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:35:05.893016   54676 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:35:05.893090   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:35:05.903442   54676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0729 11:35:05.920790   54676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:35:05.937622   54676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0729 11:35:05.955864   54676 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0729 11:35:05.960526   54676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:35:05.973924   54676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:06.105218   54676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:35:06.122693   54676 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479 for IP: 192.168.39.78
	I0729 11:35:06.122728   54676 certs.go:194] generating shared ca certs ...
	I0729 11:35:06.122753   54676 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.122923   54676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:35:06.122976   54676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:35:06.122988   54676 certs.go:256] generating profile certs ...
	I0729 11:35:06.123053   54676 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key
	I0729 11:35:06.123070   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt with IP's: []
	I0729 11:35:06.268361   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt ...
	I0729 11:35:06.268388   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: {Name:mk441be0f438f4cc71c9daed3645b6bb59ec29e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.268551   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key ...
	I0729 11:35:06.268561   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.key: {Name:mkb514f0cef69973e786a2d311c2505eb21ab3cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.268637   54676 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29
	I0729 11:35:06.268651   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
	I0729 11:35:06.577069   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 ...
	I0729 11:35:06.577098   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29: {Name:mka5fb2b0572786db33d01b13abf5cdf5d406751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.577290   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29 ...
	I0729 11:35:06.577307   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29: {Name:mk8dd01ce37394dbab647df18ec9ea942c84b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.577401   54676 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt.4a0b0a29 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt
	I0729 11:35:06.577498   54676 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key.4a0b0a29 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key
	I0729 11:35:06.577554   54676 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key
	I0729 11:35:06.577568   54676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt with IP's: []
	I0729 11:35:06.669314   54676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt ...
	I0729 11:35:06.669341   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt: {Name:mkcc8cc24796d50a0f84ce77a4defd101d589c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:06.749498   54676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key ...
	I0729 11:35:06.749530   54676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key: {Name:mk4f8d1cbd9bc0aed6b3ba0ba9c23d1c6fef85ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
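The certs.go / crypto.go lines above generate the profile certificates signed by the cached minikubeCA, with the apiserver cert carrying the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]. A hedged standard-library sketch of signing such a serving cert; it creates a throwaway CA to stay self-contained, whereas minikube reuses the existing ca.key (errors are ignored for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert with the IP SANs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.78"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("signed apiserver cert with", len(srvTmpl.IPAddresses), "IP SANs")
}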
	I0729 11:35:06.749800   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:35:06.749859   54676 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:35:06.749869   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:35:06.749900   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:35:06.749932   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:35:06.749963   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:35:06.750021   54676 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:35:06.750814   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:35:06.777801   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:35:06.807185   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:35:06.856404   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:35:06.890114   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0729 11:35:06.919087   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:35:06.946491   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:35:06.973887   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:35:07.001577   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:35:07.030874   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:35:07.061191   54676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:35:07.090051   54676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:35:07.108372   54676 ssh_runner.go:195] Run: openssl version
	I0729 11:35:07.115154   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:35:07.127327   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.132931   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.133023   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:35:07.141065   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:35:07.157731   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:35:07.171241   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.176767   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.176858   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:35:07.183533   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:35:07.195718   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:35:07.208469   54676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.213945   54676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.214020   54676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:35:07.222252   54676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
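The sequence above links each trusted certificate into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients locate CA certificates. A minimal Go sketch of that step, assuming the cert paths from the log and an illustrative helper name (not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash mirrors the logged `openssl x509 -hash` + `ln -fs` pair: it
    // computes the subject hash of certPath and links <hash>.0 to it in certsDir.
    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }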
	I0729 11:35:07.238415   54676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:35:07.244705   54676 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:35:07.244766   54676 kubeadm.go:392] StartCluster: {Name:auto-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:35:07.244846   54676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:35:07.244915   54676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:35:07.291535   54676 cri.go:89] found id: ""
	I0729 11:35:07.291617   54676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:35:07.302113   54676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:35:07.312323   54676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:35:07.322593   54676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:35:07.322614   54676 kubeadm.go:157] found existing configuration files:
	
	I0729 11:35:07.322659   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:35:07.332200   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:35:07.332258   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:35:07.342383   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:35:07.353207   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:35:07.353260   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:35:07.365201   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:35:07.375763   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:35:07.375830   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:35:07.386291   54676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:35:07.396187   54676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:35:07.396269   54676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
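The config check above fails on a fresh node (none of the kubeadm kubeconfig files exist yet), so each file is grepped for the expected control-plane endpoint and removed when the check does not pass, clearing the way for `kubeadm init` to regenerate it. A minimal sketch of that check-and-remove pattern, with the endpoint and file list taken from the log and the helper name purely illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanupStaleConfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint (missing files also fail the grep).
    func cleanupStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    	fmt.Println("stale kubeconfig cleanup done")
    }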
	I0729 11:35:07.406330   54676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:35:07.477697   54676 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:35:07.477752   54676 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:35:07.654939   54676 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:35:07.655101   54676 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:35:07.655214   54676 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:35:07.883739   54676 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:35:08.000194   54676 out.go:204]   - Generating certificates and keys ...
	I0729 11:35:08.000325   54676 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:35:08.000406   54676 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:35:08.029753   54676 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:35:08.108140   54676 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:35:08.193580   54676 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:35:08.310599   54676 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:35:08.590874   54676 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:35:08.591176   54676 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-184479 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0729 11:35:08.672484   54676 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:35:08.672835   54676 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-184479 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0729 11:35:05.046490   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:05.047055   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:05.047085   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:05.047013   55451 retry.go:31] will retry after 1.299032255s: waiting for machine to come up
	I0729 11:35:06.348125   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:06.348512   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:06.348541   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:06.348465   55451 retry.go:31] will retry after 1.740256381s: waiting for machine to come up
	I0729 11:35:08.090046   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:08.090559   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:08.090591   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:08.090502   55451 retry.go:31] will retry after 2.171003514s: waiting for machine to come up
	I0729 11:35:08.762967   54676 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:35:09.107262   54676 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:35:09.280195   54676 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:35:09.280466   54676 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:35:09.459763   54676 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:35:09.560711   54676 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:35:09.813353   54676 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:35:10.015067   54676 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:35:10.212203   54676 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:35:10.212929   54676 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:35:10.215766   54676 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:35:07.569715   54471 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:35:07.584803   54471 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:35:07.613560   54471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:35:08.479265   54471 system_pods.go:59] 6 kube-system pods found
	I0729 11:35:08.479325   54471 system_pods.go:61] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:35:08.479337   54471 system_pods.go:61] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:35:08.479357   54471 system_pods.go:61] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:35:08.479371   54471 system_pods.go:61] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:35:08.479382   54471 system_pods.go:61] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:35:08.479390   54471 system_pods.go:61] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:08.479400   54471 system_pods.go:74] duration metric: took 865.816731ms to wait for pod list to return data ...
	I0729 11:35:08.479412   54471 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:35:08.871909   54471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:35:08.871951   54471 node_conditions.go:123] node cpu capacity is 2
	I0729 11:35:08.871966   54471 node_conditions.go:105] duration metric: took 392.547456ms to run NodePressure ...
	I0729 11:35:08.871994   54471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:35:09.524230   54471 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:35:09.533772   54471 kubeadm.go:739] kubelet initialised
	I0729 11:35:09.533800   54471 kubeadm.go:740] duration metric: took 9.543229ms waiting for restarted kubelet to initialise ...
	I0729 11:35:09.533810   54471 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:09.540269   54471 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:11.554242   54471 pod_ready.go:102] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:12.050419   54471 pod_ready.go:92] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:12.050451   54471 pod_ready.go:81] duration metric: took 2.510153387s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:12.050463   54471 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:10.217673   54676 out.go:204]   - Booting up control plane ...
	I0729 11:35:10.217792   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:35:10.217898   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:35:10.218010   54676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:35:10.243731   54676 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:35:10.244843   54676 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:35:10.244937   54676 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:35:10.372396   54676 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:35:10.372498   54676 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:35:11.372815   54676 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001126854s
	I0729 11:35:11.372975   54676 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:35:10.262974   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:10.263544   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:10.263584   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:10.263496   55451 retry.go:31] will retry after 2.411239645s: waiting for machine to come up
	I0729 11:35:12.676394   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:12.677030   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:12.677061   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:12.676967   55451 retry.go:31] will retry after 2.577129835s: waiting for machine to come up
	I0729 11:35:16.373273   54676 kubeadm.go:310] [api-check] The API server is healthy after 5.001948251s
	I0729 11:35:16.386417   54676 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:35:16.400103   54676 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:35:16.434865   54676 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:35:16.435137   54676 kubeadm.go:310] [mark-control-plane] Marking the node auto-184479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:35:16.447755   54676 kubeadm.go:310] [bootstrap-token] Using token: zuvxcj.n00cmdazyfmsft3c
	I0729 11:35:14.057608   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:16.057758   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:16.449323   54676 out.go:204]   - Configuring RBAC rules ...
	I0729 11:35:16.449475   54676 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:35:16.456943   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:35:16.464635   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:35:16.467894   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:35:16.472032   54676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:35:16.480543   54676 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:35:16.779046   54676 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:35:17.211146   54676 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:35:17.777315   54676 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:35:17.778809   54676 kubeadm.go:310] 
	I0729 11:35:17.778896   54676 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:35:17.778924   54676 kubeadm.go:310] 
	I0729 11:35:17.779032   54676 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:35:17.779048   54676 kubeadm.go:310] 
	I0729 11:35:17.779098   54676 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:35:17.779185   54676 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:35:17.779257   54676 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:35:17.779269   54676 kubeadm.go:310] 
	I0729 11:35:17.779313   54676 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:35:17.779319   54676 kubeadm.go:310] 
	I0729 11:35:17.779359   54676 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:35:17.779365   54676 kubeadm.go:310] 
	I0729 11:35:17.779408   54676 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:35:17.779489   54676 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:35:17.779565   54676 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:35:17.779575   54676 kubeadm.go:310] 
	I0729 11:35:17.779665   54676 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:35:17.779771   54676 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:35:17.779788   54676 kubeadm.go:310] 
	I0729 11:35:17.779946   54676 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zuvxcj.n00cmdazyfmsft3c \
	I0729 11:35:17.780085   54676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:35:17.780133   54676 kubeadm.go:310] 	--control-plane 
	I0729 11:35:17.780142   54676 kubeadm.go:310] 
	I0729 11:35:17.780243   54676 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:35:17.780252   54676 kubeadm.go:310] 
	I0729 11:35:17.780350   54676 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zuvxcj.n00cmdazyfmsft3c \
	I0729 11:35:17.780486   54676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:35:17.780858   54676 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:35:17.780884   54676 cni.go:84] Creating CNI manager for ""
	I0729 11:35:17.780921   54676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:35:17.782822   54676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:35:17.784339   54676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:35:17.797167   54676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:35:17.818764   54676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:35:17.818924   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-184479 minikube.k8s.io/updated_at=2024_07_29T11_35_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=auto-184479 minikube.k8s.io/primary=true
	I0729 11:35:17.818926   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:17.851332   54676 ops.go:34] apiserver oom_adj: -16
	I0729 11:35:17.939350   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:18.440391   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:15.255703   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:15.256171   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:15.256236   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:15.256157   55451 retry.go:31] will retry after 3.226973911s: waiting for machine to come up
	I0729 11:35:18.484415   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:18.484961   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find current IP address of domain kindnet-184479 in network mk-kindnet-184479
	I0729 11:35:18.484991   55120 main.go:141] libmachine: (kindnet-184479) DBG | I0729 11:35:18.484888   55451 retry.go:31] will retry after 4.742962857s: waiting for machine to come up
	I0729 11:35:18.558919   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:21.058030   54471 pod_ready.go:102] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"False"
	I0729 11:35:21.557883   54471 pod_ready.go:92] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.557904   54471 pod_ready.go:81] duration metric: took 9.507433418s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.557913   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.562654   54471 pod_ready.go:92] pod "kube-apiserver-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.562671   54471 pod_ready.go:81] duration metric: took 4.751409ms for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.562680   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.567396   54471 pod_ready.go:92] pod "kube-controller-manager-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.567417   54471 pod_ready.go:81] duration metric: took 4.73005ms for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.567428   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.573533   54471 pod_ready.go:92] pod "kube-proxy-9c8zc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.573553   54471 pod_ready.go:81] duration metric: took 6.117529ms for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.573565   54471 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.577880   54471 pod_ready.go:92] pod "kube-scheduler-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:21.577902   54471 pod_ready.go:81] duration metric: took 4.328068ms for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:21.577908   54471 pod_ready.go:38] duration metric: took 12.044087293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:21.577922   54471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:35:21.590432   54471 ops.go:34] apiserver oom_adj: -16
	I0729 11:35:21.590457   54471 kubeadm.go:597] duration metric: took 42.287441873s to restartPrimaryControlPlane
	I0729 11:35:21.590469   54471 kubeadm.go:394] duration metric: took 42.452792119s to StartCluster
	I0729 11:35:21.590489   54471 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:21.590578   54471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:35:21.591246   54471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:35:21.591467   54471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:35:21.591547   54471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:35:21.591751   54471 config.go:182] Loaded profile config "pause-581851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:35:21.593428   54471 out.go:177] * Enabled addons: 
	I0729 11:35:21.593449   54471 out.go:177] * Verifying Kubernetes components...
	I0729 11:35:21.594798   54471 addons.go:510] duration metric: took 3.254769ms for enable addons: enabled=[]
	I0729 11:35:21.594916   54471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:21.765966   54471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:35:21.784069   54471 node_ready.go:35] waiting up to 6m0s for node "pause-581851" to be "Ready" ...
	I0729 11:35:21.786874   54471 node_ready.go:49] node "pause-581851" has status "Ready":"True"
	I0729 11:35:21.786900   54471 node_ready.go:38] duration metric: took 2.78954ms for node "pause-581851" to be "Ready" ...
	I0729 11:35:21.786910   54471 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:35:21.958767   54471 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
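The pod_ready.go lines poll each system-critical pod until its Ready condition reports "True". A rough sketch of that readiness check using client-go (not minikube's actual code; the kubeconfig path, namespace, and pod name are taken from the log purely for illustration):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19337-3845/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-581851", metav1.GetOptions{})
    		if err == nil && podReady(p) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }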
	I0729 11:35:18.940153   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:19.439367   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:19.939376   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:20.439903   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:20.940375   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:21.439921   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:21.939567   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:22.439942   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:22.939954   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:35:23.439533   54676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
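The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm has finished, and bootstrap is considered settled once the `default` service account exists in the new cluster. A minimal sketch of that loop (binary and kubeconfig paths taken from the log; the timeout and helper name are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the timeout expires.
    func waitForDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.30.3/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }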
	I0729 11:35:23.232174   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.232720   55120 main.go:141] libmachine: (kindnet-184479) Found IP for machine: 192.168.61.227
	I0729 11:35:23.232744   55120 main.go:141] libmachine: (kindnet-184479) Reserving static IP address...
	I0729 11:35:23.232754   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has current primary IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.233162   55120 main.go:141] libmachine: (kindnet-184479) DBG | unable to find host DHCP lease matching {name: "kindnet-184479", mac: "52:54:00:99:79:ab", ip: "192.168.61.227"} in network mk-kindnet-184479
	I0729 11:35:23.311759   55120 main.go:141] libmachine: (kindnet-184479) DBG | Getting to WaitForSSH function...
	I0729 11:35:23.311790   55120 main.go:141] libmachine: (kindnet-184479) Reserved static IP address: 192.168.61.227
	I0729 11:35:23.311804   55120 main.go:141] libmachine: (kindnet-184479) Waiting for SSH to be available...
	I0729 11:35:23.314378   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.314776   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.314806   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.314968   55120 main.go:141] libmachine: (kindnet-184479) DBG | Using SSH client type: external
	I0729 11:35:23.314993   55120 main.go:141] libmachine: (kindnet-184479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa (-rw-------)
	I0729 11:35:23.315042   55120 main.go:141] libmachine: (kindnet-184479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:35:23.315056   55120 main.go:141] libmachine: (kindnet-184479) DBG | About to run SSH command:
	I0729 11:35:23.315071   55120 main.go:141] libmachine: (kindnet-184479) DBG | exit 0
	I0729 11:35:23.443245   55120 main.go:141] libmachine: (kindnet-184479) DBG | SSH cmd err, output: <nil>: 
	I0729 11:35:23.443544   55120 main.go:141] libmachine: (kindnet-184479) KVM machine creation complete!
	I0729 11:35:23.443844   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetConfigRaw
	I0729 11:35:23.444438   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:23.444626   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:23.444778   55120 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:35:23.444792   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetState
	I0729 11:35:23.446126   55120 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:35:23.446144   55120 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:35:23.446152   55120 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:35:23.446159   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.448691   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.449142   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.449190   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.449314   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.449485   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.449653   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.449780   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.449994   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.450195   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.450208   55120 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:35:23.558411   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:35:23.558432   55120 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:35:23.558441   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.561527   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.561932   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.561993   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.562096   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.562309   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.562491   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.562623   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.562796   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.562965   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.562974   55120 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:35:23.667599   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:35:23.667680   55120 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:35:23.667699   55120 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:35:23.667713   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.667962   55120 buildroot.go:166] provisioning hostname "kindnet-184479"
	I0729 11:35:23.667987   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.668161   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.671023   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.671485   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.671517   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.671707   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.671891   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.672164   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.672350   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.672535   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.672730   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.672749   55120 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-184479 && echo "kindnet-184479" | sudo tee /etc/hostname
	I0729 11:35:23.793440   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-184479
	
	I0729 11:35:23.793467   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.796689   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.797234   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.797260   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.797455   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:23.797680   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.797892   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:23.798096   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:23.798272   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:23.798448   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:23.798463   55120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-184479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-184479/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-184479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:35:23.912548   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:35:23.912582   55120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:35:23.912630   55120 buildroot.go:174] setting up certificates
	I0729 11:35:23.912643   55120 provision.go:84] configureAuth start
	I0729 11:35:23.912660   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetMachineName
	I0729 11:35:23.912946   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetIP
	I0729 11:35:23.915791   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.916147   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.916174   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.916349   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:23.918500   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.918822   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:23.918846   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:23.919079   55120 provision.go:143] copyHostCerts
	I0729 11:35:23.919138   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:35:23.919151   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:35:23.919220   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:35:23.919358   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:35:23.919369   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:35:23.919401   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:35:23.919486   55120 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:35:23.919497   55120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:35:23.919522   55120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:35:23.919595   55120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.kindnet-184479 san=[127.0.0.1 192.168.61.227 kindnet-184479 localhost minikube]
	I0729 11:35:24.086399   55120 provision.go:177] copyRemoteCerts
	I0729 11:35:24.086449   55120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:35:24.086471   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.089646   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.090043   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.090073   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.090263   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.090503   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.090717   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.090871   55120 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa Username:docker}
	I0729 11:35:24.177525   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0729 11:35:24.204554   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:35:24.230747   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:35:24.257148   55120 provision.go:87] duration metric: took 344.486849ms to configureAuth
	I0729 11:35:24.257183   55120 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:35:24.257455   55120 config.go:182] Loaded profile config "kindnet-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:35:24.257533   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.260346   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.260679   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.260706   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.260925   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.261130   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.261334   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.261488   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.261654   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:24.261927   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:24.261949   55120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:35:24.808239   55365 start.go:364] duration metric: took 35.975039372s to acquireMachinesLock for "calico-184479"
	I0729 11:35:24.808321   55365 start.go:93] Provisioning new machine with config: &{Name:calico-184479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:calico-184479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:35:24.808474   55365 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:35:22.356291   54471 pod_ready.go:92] pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:22.356321   54471 pod_ready.go:81] duration metric: took 397.52382ms for pod "coredns-7db6d8ff4d-fmbbt" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.356334   54471 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.754759   54471 pod_ready.go:92] pod "etcd-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:22.754783   54471 pod_ready.go:81] duration metric: took 398.442251ms for pod "etcd-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:22.754793   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.154795   54471 pod_ready.go:92] pod "kube-apiserver-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.154818   54471 pod_ready.go:81] duration metric: took 400.019014ms for pod "kube-apiserver-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.154830   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.554929   54471 pod_ready.go:92] pod "kube-controller-manager-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.554954   54471 pod_ready.go:81] duration metric: took 400.116953ms for pod "kube-controller-manager-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.554965   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.955191   54471 pod_ready.go:92] pod "kube-proxy-9c8zc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:23.955219   54471 pod_ready.go:81] duration metric: took 400.247192ms for pod "kube-proxy-9c8zc" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:23.955233   54471 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:24.355613   54471 pod_ready.go:92] pod "kube-scheduler-pause-581851" in "kube-system" namespace has status "Ready":"True"
	I0729 11:35:24.355635   54471 pod_ready.go:81] duration metric: took 400.395216ms for pod "kube-scheduler-pause-581851" in "kube-system" namespace to be "Ready" ...
	I0729 11:35:24.355642   54471 pod_ready.go:38] duration metric: took 2.568722382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
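	The pod_ready.go lines above poll each control-plane pod roughly every 400ms until its Ready condition turns true, with a 6m budget per pod. A minimal client-go sketch of that kind of poll (not minikube's own code; the kubeconfig path is assumed, the pod name and timeout are taken from the log):

	// poll_ready.go - hypothetical sketch of waiting for a pod's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the real run uses the test profile's config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-581851", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(400 * time.Millisecond) // the log polls roughly every 400ms
		}
		fmt.Println("timed out waiting for Ready")
	}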
	I0729 11:35:24.355655   54471 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:35:24.355700   54471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:35:24.371312   54471 api_server.go:72] duration metric: took 2.779816745s to wait for apiserver process to appear ...
	I0729 11:35:24.371347   54471 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:35:24.371370   54471 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 11:35:24.375969   54471 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 11:35:24.376964   54471 api_server.go:141] control plane version: v1.30.3
	I0729 11:35:24.376983   54471 api_server.go:131] duration metric: took 5.628266ms to wait for apiserver health ...
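	The healthz wait above is an HTTPS GET against the apiserver that succeeds once it returns 200 with the body "ok". A self-contained sketch of the same probe; it skips TLS verification and the client certificates the real check presents from the profile's cert store, so treat it purely as illustration:

	// healthz_probe.go - hypothetical sketch of the apiserver healthz probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Demo only: the real check authenticates and verifies the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.50.53:8443/healthz") // endpoint from the log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}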
	I0729 11:35:24.376993   54471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:35:24.558536   54471 system_pods.go:59] 6 kube-system pods found
	I0729 11:35:24.558566   54471 system_pods.go:61] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running
	I0729 11:35:24.558572   54471 system_pods.go:61] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running
	I0729 11:35:24.558575   54471 system_pods.go:61] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running
	I0729 11:35:24.558579   54471 system_pods.go:61] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running
	I0729 11:35:24.558582   54471 system_pods.go:61] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running
	I0729 11:35:24.558585   54471 system_pods.go:61] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:24.558590   54471 system_pods.go:74] duration metric: took 181.592074ms to wait for pod list to return data ...
	I0729 11:35:24.558599   54471 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:35:24.754913   54471 default_sa.go:45] found service account: "default"
	I0729 11:35:24.754946   54471 default_sa.go:55] duration metric: took 196.341673ms for default service account to be created ...
	I0729 11:35:24.754958   54471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:35:24.960193   54471 system_pods.go:86] 6 kube-system pods found
	I0729 11:35:24.960225   54471 system_pods.go:89] "coredns-7db6d8ff4d-fmbbt" [ee1d727f-0f74-4d9d-b25c-f2a885c5d965] Running
	I0729 11:35:24.960232   54471 system_pods.go:89] "etcd-pause-581851" [813e6429-8327-4b00-9b30-a3cf17beb72c] Running
	I0729 11:35:24.960245   54471 system_pods.go:89] "kube-apiserver-pause-581851" [b6c94874-ccbf-4169-9f8b-504b0e97c887] Running
	I0729 11:35:24.960251   54471 system_pods.go:89] "kube-controller-manager-pause-581851" [2a916bb9-ea90-47d1-9343-2774c1d2f74c] Running
	I0729 11:35:24.960258   54471 system_pods.go:89] "kube-proxy-9c8zc" [a92ad38f-257a-4364-a81c-2b6bfcb3150c] Running
	I0729 11:35:24.960263   54471 system_pods.go:89] "kube-scheduler-pause-581851" [b37b3ec1-e38b-4e63-a41e-ab904d9d1246] Running
	I0729 11:35:24.960281   54471 system_pods.go:126] duration metric: took 205.307612ms to wait for k8s-apps to be running ...
	I0729 11:35:24.960294   54471 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:35:24.960343   54471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:35:24.980671   54471 system_svc.go:56] duration metric: took 20.366233ms WaitForService to wait for kubelet
	I0729 11:35:24.980705   54471 kubeadm.go:582] duration metric: took 3.389213924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:35:24.980729   54471 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:35:25.156238   54471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:35:25.156267   54471 node_conditions.go:123] node cpu capacity is 2
	I0729 11:35:25.156277   54471 node_conditions.go:105] duration metric: took 175.54223ms to run NodePressure ...
	I0729 11:35:25.156293   54471 start.go:241] waiting for startup goroutines ...
	I0729 11:35:25.156305   54471 start.go:246] waiting for cluster config update ...
	I0729 11:35:25.156315   54471 start.go:255] writing updated cluster config ...
	I0729 11:35:25.156649   54471 ssh_runner.go:195] Run: rm -f paused
	I0729 11:35:25.209606   54471 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:35:25.211704   54471 out.go:177] * Done! kubectl is now configured to use "pause-581851" cluster and "default" namespace by default
	I0729 11:35:24.557651   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:35:24.557683   55120 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:35:24.557694   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetURL
	I0729 11:35:24.559062   55120 main.go:141] libmachine: (kindnet-184479) DBG | Using libvirt version 6000000
	I0729 11:35:24.561356   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.561716   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.561744   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.561934   55120 main.go:141] libmachine: Docker is up and running!
	I0729 11:35:24.561954   55120 main.go:141] libmachine: Reticulating splines...
	I0729 11:35:24.561961   55120 client.go:171] duration metric: took 24.949304891s to LocalClient.Create
	I0729 11:35:24.561984   55120 start.go:167] duration metric: took 24.949371969s to libmachine.API.Create "kindnet-184479"
	I0729 11:35:24.561993   55120 start.go:293] postStartSetup for "kindnet-184479" (driver="kvm2")
	I0729 11:35:24.562002   55120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:35:24.562016   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:24.562261   55120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:35:24.562287   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.564683   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.565037   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.565065   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.565270   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.565455   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.565648   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.565811   55120 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa Username:docker}
	I0729 11:35:24.650462   55120 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:35:24.655181   55120 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:35:24.655203   55120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:35:24.655268   55120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:35:24.655342   55120 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:35:24.655448   55120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:35:24.665730   55120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:35:24.692543   55120 start.go:296] duration metric: took 130.535492ms for postStartSetup
	I0729 11:35:24.692592   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetConfigRaw
	I0729 11:35:24.693207   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetIP
	I0729 11:35:24.696160   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.696673   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.696704   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.696970   55120 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/config.json ...
	I0729 11:35:24.697207   55120 start.go:128] duration metric: took 25.108915415s to createHost
	I0729 11:35:24.697238   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.700010   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.700455   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.700482   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.700658   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.700871   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.701013   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.701147   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.701391   55120 main.go:141] libmachine: Using SSH client type: native
	I0729 11:35:24.701603   55120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0729 11:35:24.701616   55120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:35:24.808024   55120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252924.751705939
	
	I0729 11:35:24.808054   55120 fix.go:216] guest clock: 1722252924.751705939
	I0729 11:35:24.808064   55120 fix.go:229] Guest: 2024-07-29 11:35:24.751705939 +0000 UTC Remote: 2024-07-29 11:35:24.697225285 +0000 UTC m=+50.326840498 (delta=54.480654ms)
	I0729 11:35:24.808098   55120 fix.go:200] guest clock delta is within tolerance: 54.480654ms
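	fix.go compares the guest's "date +%s.%N" output against the host clock and accepts the skew if the delta stays under a tolerance; here the delta was about 54ms. A rough local sketch of that comparison (the 2s tolerance is an assumption, and the command runs locally instead of over SSH):

	// clock_delta.go - hypothetical sketch of the guest/host clock comparison.
	package main

	import (
		"fmt"
		"math"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// On the real run this goes over SSH to the guest; locally we just run date.
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			panic(err)
		}
		guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			panic(err)
		}
		host := float64(time.Now().UnixNano()) / 1e9
		delta := time.Duration(math.Abs(host-guest) * float64(time.Second))
		const tolerance = 2 * time.Second // illustrative; the log only reports "within tolerance"
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}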
	I0729 11:35:24.808112   55120 start.go:83] releasing machines lock for "kindnet-184479", held for 25.220050039s
	I0729 11:35:24.808139   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:24.808436   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetIP
	I0729 11:35:24.811778   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.812140   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.812183   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.812379   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:24.812973   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:24.813166   55120 main.go:141] libmachine: (kindnet-184479) Calling .DriverName
	I0729 11:35:24.813260   55120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:35:24.813301   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.813378   55120 ssh_runner.go:195] Run: cat /version.json
	I0729 11:35:24.813402   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHHostname
	I0729 11:35:24.816087   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.816508   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.816538   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.816596   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.816748   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.816968   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.817085   55120 main.go:141] libmachine: (kindnet-184479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:79:ab", ip: ""} in network mk-kindnet-184479: {Iface:virbr2 ExpiryTime:2024-07-29 12:35:15 +0000 UTC Type:0 Mac:52:54:00:99:79:ab Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kindnet-184479 Clientid:01:52:54:00:99:79:ab}
	I0729 11:35:24.817115   55120 main.go:141] libmachine: (kindnet-184479) DBG | domain kindnet-184479 has defined IP address 192.168.61.227 and MAC address 52:54:00:99:79:ab in network mk-kindnet-184479
	I0729 11:35:24.817193   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.817376   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHPort
	I0729 11:35:24.817404   55120 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa Username:docker}
	I0729 11:35:24.817532   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHKeyPath
	I0729 11:35:24.817707   55120 main.go:141] libmachine: (kindnet-184479) Calling .GetSSHUsername
	I0729 11:35:24.817841   55120 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/kindnet-184479/id_rsa Username:docker}
	I0729 11:35:24.927562   55120 ssh_runner.go:195] Run: systemctl --version
	I0729 11:35:24.935612   55120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:35:25.110839   55120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:35:25.117240   55120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:35:25.117317   55120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:35:25.134120   55120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:35:25.134151   55120 start.go:495] detecting cgroup driver to use...
	I0729 11:35:25.134222   55120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:35:25.158526   55120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:35:25.177146   55120 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:35:25.177206   55120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:35:25.193634   55120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:35:25.214715   55120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:35:25.354904   55120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:35:25.543886   55120 docker.go:233] disabling docker service ...
	I0729 11:35:25.543952   55120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:35:25.561180   55120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:35:25.578851   55120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:35:25.751436   55120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:35:25.905954   55120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:35:25.922934   55120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:35:25.948354   55120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:35:25.948415   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:25.960243   55120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:35:25.960310   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:25.974625   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:25.986655   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:26.002068   55120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:35:26.017909   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:26.032213   55120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:35:26.053391   55120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
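	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image, cgroup_manager, conmon_cgroup and default_sysctls carry the values minikube expects. A sketch of the first of those edits (the pause image rewrite) done in Go instead of sed, purely as illustration of the same substitution:

	// rewrite_pause_image.go - hypothetical sketch of the pause_image line rewrite.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same pattern the sed command uses: replace any line mentioning pause_image.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}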
	I0729 11:35:26.064440   55120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:35:26.075655   55120 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:35:26.075719   55120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:35:26.089532   55120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
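	The "echo 1 > /proc/sys/net/ipv4/ip_forward" step simply turns on IPv4 forwarding in the guest. An equivalent one-file Go sketch (requires root, like the sudo invocation in the log):

	// enable_ip_forward.go - hypothetical sketch of enabling IPv4 forwarding.
	package main

	import "os"

	func main() {
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}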
	I0729 11:35:26.099395   55120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:35:26.255102   55120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:35:26.433328   55120 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:35:26.433392   55120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
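	start.go then waits up to 60s for the CRI-O socket path to exist before probing crictl. A small sketch of that wait (the 500ms poll interval is an assumption; the path and budget come from the log):

	// wait_for_sock.go - hypothetical sketch of the 60s wait for the crio socket.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("crio socket is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for crio socket")
	}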
	I0729 11:35:26.438769   55120 start.go:563] Will wait 60s for crictl version
	I0729 11:35:26.438830   55120 ssh_runner.go:195] Run: which crictl
	I0729 11:35:26.443712   55120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:35:26.492391   55120 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:35:26.492499   55120 ssh_runner.go:195] Run: crio --version
	I0729 11:35:26.525734   55120 ssh_runner.go:195] Run: crio --version
	I0729 11:35:26.567044   55120 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	
	
	==> CRI-O <==
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.822058645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cd23cdd-2859-4b18-b452-bc156ba1407d name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.823962048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7732f9b-00ce-4b57-b436-0289acc19228 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.824955976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252928824474490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7732f9b-00ce-4b57-b436-0289acc19228 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.828852375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5117afa9-11bb-4409-8870-9dad1f1a1361 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.828949604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5117afa9-11bb-4409-8870-9dad1f1a1361 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.829462061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5117afa9-11bb-4409-8870-9dad1f1a1361 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.876958798Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f21dacad-fb9f-4fad-9c55-705fdf9260b5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.877368304Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmbbt,Uid:ee1d727f-0f74-4d9d-b25c-f2a885c5d965,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722252877015249483,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:33:57.617279987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&PodSandboxMetadata{Name:kube-proxy-9c8zc,Uid:a92ad38f-257a-4364-a81c-2b6bfcb3150c,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722252876966857656,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:33:57.263055228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-581851,Uid:db3f4ba920479cef345069fc7f03f2a6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722252876950513219,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: db3f4ba920479cef345069fc7f03f2a6,kubernetes.io/config.seen: 2024-07-29T11:33:44.146936659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&PodSandboxMetadata{Name:etcd-pause-581851,Uid:3a7f9b3d084d380046bf1ab2256f9bff,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722252876896661489,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.53:2379,kubernetes.io/config.hash: 3a7f9b3d084d380046bf1ab2256f9bff,kubernetes.io/config.seen: 2024-07-29T11:33:44.146937768Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e6
38f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-581851,Uid:9e1a86c6084badc8070336a236272b37,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722252876830206130,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.53:8443,kubernetes.io/config.hash: 9e1a86c6084badc8070336a236272b37,kubernetes.io/config.seen: 2024-07-29T11:33:44.146876962Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-581851,Uid:1cbb85ee2a0be4e7412bf833f7ba4bf1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722252876818977560,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cbb85ee2a0be4e7412bf833f7ba4bf1,kubernetes.io/config.seen: 2024-07-29T11:33:44.146935076Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f21dacad-fb9f-4fad-9c55-705fdf9260b5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.878312257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1321434d-edde-466e-a7b2-a6e134097e19 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.878392800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1321434d-edde-466e-a7b2-a6e134097e19 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.878840492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1321434d-edde-466e-a7b2-a6e134097e19 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.891122868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53812b59-68f3-4b55-99e7-47411201aa4b name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.891224029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53812b59-68f3-4b55-99e7-47411201aa4b name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.892880663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=516455ad-b15e-44c7-aaf2-f49beb223e89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.893374285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252928893349577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=516455ad-b15e-44c7-aaf2-f49beb223e89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.894012509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc5b3342-6af7-42bd-b57c-a34c8a6aed04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.894085051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc5b3342-6af7-42bd-b57c-a34c8a6aed04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.894704920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc5b3342-6af7-42bd-b57c-a34c8a6aed04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.967122330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11c14a17-9801-440d-bda4-ab766f51f71f name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.967240376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11c14a17-9801-440d-bda4-ab766f51f71f name=/runtime.v1.RuntimeService/Version
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.969113154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c3988d2-e38c-4705-85fa-63971605b222 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.969754877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722252928969717150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c3988d2-e38c-4705-85fa-63971605b222 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.970839328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff6a3e29-e4e5-4fcb-9924-370ec80ca211 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.970918195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff6a3e29-e4e5-4fcb-9924-370ec80ca211 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:35:28 pause-581851 crio[2235]: time="2024-07-29 11:35:28.971282472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722252907160453316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252907161483767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252902072079926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252902093327872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252902099342266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252902060382068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io
.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6,PodSandboxId:426cac2b8b2108f7a0422f8b0ea3670ea9dc98d5defa3394bd3232e231a45025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252878067236756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmbbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1d727f-0f74-4d9d-b25c-f2a885c5d965,},Annotations:map[string]string{io.kubernetes.container.hash: 8314
e1ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606,PodSandboxId:eed8fa4247f115fe8333ca63e9bbe532e1d92aee756307711bf2f67e29b9fb93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252877525849539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-9c8zc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92ad38f-257a-4364-a81c-2b6bfcb3150c,},Annotations:map[string]string{io.kubernetes.container.hash: c20c90d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8,PodSandboxId:19749966fb3ee76d85c3a78fb96c4444f39b20e1da71b33dcc5a70b534e64f63,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252877397830597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-581851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db3f4ba920479cef345069fc7f03f2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2,PodSandboxId:82a89953e42f6b47b34f0f245f671f8afd130fa7264ee9ae005e8c89b7d0888d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252877323157001,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-581851,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3a7f9b3d084d380046bf1ab2256f9bff,},Annotations:map[string]string{io.kubernetes.container.hash: 7f116299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc,PodSandboxId:55eaea7e342360a47031daeb322aca02ebfd5d08b17550486ad01af489270489,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722252877268209557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-581851,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1cbb85ee2a0be4e7412bf833f7ba4bf1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5,PodSandboxId:1271f1fee93b3a7e1b017731a93f2135daa15bfcf56f154b61b16c341a4e638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722252877203477568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-581851,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9e1a86c6084badc8070336a236272b37,},Annotations:map[string]string{io.kubernetes.container.hash: 49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff6a3e29-e4e5-4fcb-9924-370ec80ca211 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c537aeb34b9b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   426cac2b8b210       coredns-7db6d8ff4d-fmbbt
	ce2191b91903c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   21 seconds ago      Running             kube-proxy                2                   eed8fa4247f11       kube-proxy-9c8zc
	37ca187fb6910       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   26 seconds ago      Running             kube-controller-manager   2                   55eaea7e34236       kube-controller-manager-pause-581851
	e2f6fc6d1c26b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago      Running             kube-scheduler            2                   19749966fb3ee       kube-scheduler-pause-581851
	fe6ce9985e3e9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Running             etcd                      2                   82a89953e42f6       etcd-pause-581851
	06bc2aa0533bf       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   27 seconds ago      Running             kube-apiserver            2                   1271f1fee93b3       kube-apiserver-pause-581851
	269528986b69c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   50 seconds ago      Exited              coredns                   1                   426cac2b8b210       coredns-7db6d8ff4d-fmbbt
	fac8d87679859       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   51 seconds ago      Exited              kube-proxy                1                   eed8fa4247f11       kube-proxy-9c8zc
	1758162276bc2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   51 seconds ago      Exited              kube-scheduler            1                   19749966fb3ee       kube-scheduler-pause-581851
	02ab66aeb0736       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   51 seconds ago      Exited              etcd                      1                   82a89953e42f6       etcd-pause-581851
	ee61ceda3fe5c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   51 seconds ago      Exited              kube-controller-manager   1                   55eaea7e34236       kube-controller-manager-pause-581851
	ef298c1e5d0dc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   51 seconds ago      Exited              kube-apiserver            1                   1271f1fee93b3       kube-apiserver-pause-581851
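	
	A listing like the one above can normally be reproduced against a live profile by running crictl inside the minikube VM. This is only a sketch: the binary path and the pause-581851 profile name are taken from this report, and it assumes the cluster has not yet been torn down:
	
	  out/minikube-linux-amd64 -p pause-581851 ssh "sudo crictl ps -a"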
	
	
	==> coredns [269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38174 - 35477 "HINFO IN 1349426607110657866.891358503850990292. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014988721s
	
	
	==> coredns [c537aeb34b9b99c314d7326619460bd413833587890882a5284bf3f5ebf4d66f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60352 - 54783 "HINFO IN 2687006416712034348.3441999991459677384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014613208s
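	
	Both CoreDNS excerpts above come from the same pod across restarts. As a sketch (assuming the kubeconfig context created for the pause-581851 profile still exists), the current and previous container logs can be pulled with kubectl, where --previous returns the exited attempt:
	
	  kubectl --context pause-581851 -n kube-system logs coredns-7db6d8ff4d-fmbbt
	  kubectl --context pause-581851 -n kube-system logs coredns-7db6d8ff4d-fmbbt --previous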
	
	
	==> describe nodes <==
	Name:               pause-581851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-581851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=pause-581851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_33_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-581851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:35:05 +0000   Mon, 29 Jul 2024 11:33:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.53
	  Hostname:    pause-581851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8f7a2737fab44e58754172c3a269877
	  System UUID:                a8f7a273-7fab-44e5-8754-172c3a269877
	  Boot ID:                    3a11576d-2857-48f4-bd18-4770b96a6083
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fmbbt                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-pause-581851                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
	  kube-system                 kube-apiserver-pause-581851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-581851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-9c8zc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-581851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)  kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeReady                104s                 kubelet          Node pause-581851 status is now: NodeReady
	  Normal  RegisteredNode           93s                  node-controller  Node pause-581851 event: Registered Node pause-581851 in Controller
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)    kubelet          Node pause-581851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)    kubelet          Node pause-581851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)    kubelet          Node pause-581851 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                   node-controller  Node pause-581851 event: Registered Node pause-581851 in Controller
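	
	The node description above is standard kubectl output. Assuming the pause-581851 context is still configured, an equivalent snapshot can be taken directly:
	
	  kubectl --context pause-581851 describe node pause-581851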
	
	
	==> dmesg <==
	[  +0.056260] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.085307] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176103] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.149418] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.283712] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.272741] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.059164] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.741518] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.637339] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.964348] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.119461] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.946958] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.102345] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 11:34] systemd-fstab-generator[2152]: Ignoring "noauto" option for root device
	[  +0.089001] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.072333] systemd-fstab-generator[2164]: Ignoring "noauto" option for root device
	[  +0.193055] systemd-fstab-generator[2178]: Ignoring "noauto" option for root device
	[  +0.182990] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +0.312822] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +3.058195] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +3.848860] kauditd_printk_skb: 195 callbacks suppressed
	[Jul29 11:35] systemd-fstab-generator[3239]: Ignoring "noauto" option for root device
	[  +5.938023] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.126687] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.306316] systemd-fstab-generator[3671]: Ignoring "noauto" option for root device
	
	
	==> etcd [02ab66aeb07368d4573417fadebad7b3bf50bbf6db50703d8b2c899e38000fa2] <==
	{"level":"info","ts":"2024-07-29T11:34:38.280005Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"40a79b093a7e4780","initial-advertise-peer-urls":["https://192.168.50.53:2380"],"listen-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:34:39.885806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgPreVoteResp from 40a79b093a7e4780 at term 2"}
	{"level":"info","ts":"2024-07-29T11:34:39.885968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.885978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgVoteResp from 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.886014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.886029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 40a79b093a7e4780 elected leader 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-07-29T11:34:39.887835Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"40a79b093a7e4780","local-member-attributes":"{Name:pause-581851 ClientURLs:[https://192.168.50.53:2379]}","request-path":"/0/members/40a79b093a7e4780/attributes","cluster-id":"1150b84c67dfd974","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:34:39.887909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:39.888511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:34:39.89124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.53:2379"}
	{"level":"info","ts":"2024-07-29T11:34:39.895256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:34:39.915626Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:39.915693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:34:49.083581Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T11:34:49.08375Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-581851","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	{"level":"warn","ts":"2024-07-29T11:34:49.083825Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.083924Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.096001Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T11:34:49.096059Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T11:34:49.098777Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"40a79b093a7e4780","current-leader-member-id":"40a79b093a7e4780"}
	{"level":"info","ts":"2024-07-29T11:34:49.10265Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-07-29T11:34:49.102778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-07-29T11:34:49.102787Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-581851","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	
	
	==> etcd [fe6ce9985e3e97a93b57d7168d5c4cd801ec8d175b84f4a8f872af5459adfd94] <==
	{"level":"warn","ts":"2024-07-29T11:35:08.455736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:07.552726Z","time spent":"902.905664ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6581,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" mod_revision:448 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" value_size:6510 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-581851\" > >"}
	{"level":"warn","ts":"2024-07-29T11:35:08.862793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.609758ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277395478209568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" mod_revision:483 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T11:35:08.863033Z","caller":"traceutil/trace.go:171","msg":"trace[162885434] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:518; }","duration":"337.552997ms","start":"2024-07-29T11:35:08.525468Z","end":"2024-07-29T11:35:08.863021Z","steps":["trace[162885434] 'read index received'  (duration: 149.596872ms)","trace[162885434] 'applied index is now lower than readState.Index'  (duration: 187.955375ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:35:08.86325Z","caller":"traceutil/trace.go:171","msg":"trace[1192231471] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"383.274429ms","start":"2024-07-29T11:35:08.479966Z","end":"2024-07-29T11:35:08.863241Z","steps":["trace[1192231471] 'process raft request'  (duration: 195.162699ms)","trace[1192231471] 'compare'  (duration: 187.540045ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:35:08.863307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.47995Z","time spent":"383.323083ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4545,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" mod_revision:483 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-581851\" > >"}
	{"level":"warn","ts":"2024-07-29T11:35:08.863427Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.523047Z","time spent":"340.378001ms","remote":"127.0.0.1:47312","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-07-29T11:35:08.863526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.482053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T11:35:08.86361Z","caller":"traceutil/trace.go:171","msg":"trace[1891426879] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:493; }","duration":"384.573272ms","start":"2024-07-29T11:35:08.479028Z","end":"2024-07-29T11:35:08.863602Z","steps":["trace[1891426879] 'agreement among raft nodes before linearized reading'  (duration: 384.469369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.86363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.479022Z","time spent":"384.602324ms","remote":"127.0.0.1:46978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5448,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2024-07-29T11:35:08.863764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.394286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T11:35:08.86378Z","caller":"traceutil/trace.go:171","msg":"trace[322937798] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:493; }","duration":"334.427289ms","start":"2024-07-29T11:35:08.529347Z","end":"2024-07-29T11:35:08.863775Z","steps":["trace[322937798] 'agreement among raft nodes before linearized reading'  (duration: 334.391477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.863793Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.529339Z","time spent":"334.451722ms","remote":"127.0.0.1:47000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	{"level":"warn","ts":"2024-07-29T11:35:08.863981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.573134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2024-07-29T11:35:08.864017Z","caller":"traceutil/trace.go:171","msg":"trace[1592594679] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:493; }","duration":"336.02974ms","start":"2024-07-29T11:35:08.527982Z","end":"2024-07-29T11:35:08.864011Z","steps":["trace[1592594679] 'agreement among raft nodes before linearized reading'  (duration: 335.977721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:08.864035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T11:35:08.52797Z","time spent":"336.06041ms","remote":"127.0.0.1:46976","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":443,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-29T11:35:09.165839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.295125ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277395478209574 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-581851.17e6abe539855582\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-581851.17e6abe539855582\" value_size:462 lease:5152277395478209569 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T11:35:09.166397Z","caller":"traceutil/trace.go:171","msg":"trace[1379867667] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"298.162838ms","start":"2024-07-29T11:35:08.868215Z","end":"2024-07-29T11:35:09.166378Z","steps":["trace[1379867667] 'process raft request'  (duration: 161.258982ms)","trace[1379867667] 'compare'  (duration: 136.177654ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T11:35:09.166734Z","caller":"traceutil/trace.go:171","msg":"trace[497208888] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"296.488462ms","start":"2024-07-29T11:35:08.870237Z","end":"2024-07-29T11:35:09.166725Z","steps":["trace[497208888] 'read index received'  (duration: 159.244913ms)","trace[497208888] 'applied index is now lower than readState.Index'  (duration: 137.242799ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T11:35:09.166892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.64501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-29T11:35:09.166934Z","caller":"traceutil/trace.go:171","msg":"trace[860515993] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:494; }","duration":"296.71622ms","start":"2024-07-29T11:35:08.87021Z","end":"2024-07-29T11:35:09.166926Z","steps":["trace[860515993] 'agreement among raft nodes before linearized reading'  (duration: 296.6234ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:09.167073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.678552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-fmbbt\" ","response":"range_response_count:1 size:5121"}
	{"level":"info","ts":"2024-07-29T11:35:09.167111Z","caller":"traceutil/trace.go:171","msg":"trace[46604132] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-fmbbt; range_end:; response_count:1; response_revision:494; }","duration":"296.726529ms","start":"2024-07-29T11:35:08.870379Z","end":"2024-07-29T11:35:09.167105Z","steps":["trace[46604132] 'agreement among raft nodes before linearized reading'  (duration: 296.66571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:35:09.167228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.806761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-29T11:35:09.167305Z","caller":"traceutil/trace.go:171","msg":"trace[902931312] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:494; }","duration":"296.843058ms","start":"2024-07-29T11:35:08.870415Z","end":"2024-07-29T11:35:09.167258Z","steps":["trace[902931312] 'agreement among raft nodes before linearized reading'  (duration: 296.791413ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:35:09.167862Z","caller":"traceutil/trace.go:171","msg":"trace[1158866614] transaction","detail":"{read_only:false; number_of_response:0; response_revision:494; }","duration":"219.401562ms","start":"2024-07-29T11:35:08.94845Z","end":"2024-07-29T11:35:09.167852Z","steps":["trace[1158866614] 'process raft request'  (duration: 217.909951ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:35:29 up 2 min,  0 users,  load average: 1.27, 0.52, 0.19
	Linux pause-581851 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [06bc2aa0533bf5dfce0e67f3ce4e949df70cad348d6881c8371fa839c5964139] <==
	I0729 11:35:08.460645       1 trace.go:236] Trace[1018194497]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:761c23f2-b932-4a8f-a840-01cc8282b8b9,client:192.168.50.53,api-group:,api-version:v1,name:kube-controller-manager-pause-581851,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581851/status,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (29-Jul-2024 11:35:07.546) (total time: 914ms):
	Trace[1018194497]: ["GuaranteedUpdate etcd3" audit-id:761c23f2-b932-4a8f-a840-01cc8282b8b9,key:/pods/kube-system/kube-controller-manager-pause-581851,type:*core.Pod,resource:pods 913ms (11:35:07.546)
	Trace[1018194497]:  ---"Txn call completed" 904ms (11:35:08.456)]
	Trace[1018194497]: ---"Object stored in database" 905ms (11:35:08.456)
	Trace[1018194497]: [914.113614ms] [914.113614ms] END
	I0729 11:35:08.469148       1 trace.go:236] Trace[1616495856]: "List" accept:application/json, */*,audit-id:9b0ec97f-e991-48fe-b7ce-85b0025ea281,client:192.168.50.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (29-Jul-2024 11:35:07.611) (total time: 854ms):
	Trace[1616495856]: ["List(recursive=true) etcd3" audit-id:9b0ec97f-e991-48fe-b7ce-85b0025ea281,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 858ms (11:35:07.611)]
	Trace[1616495856]: [854.485053ms] [854.485053ms] END
	I0729 11:35:08.526147       1 trace.go:236] Trace[1119242437]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.53,type:*v1.Endpoints,resource:apiServerIPInfo (29-Jul-2024 11:35:07.017) (total time: 1508ms):
	Trace[1119242437]: ---"initial value restored" 518ms (11:35:07.535)
	Trace[1119242437]: ---"Transaction prepared" 924ms (11:35:08.460)
	Trace[1119242437]: ---"Txn call completed" 65ms (11:35:08.526)
	Trace[1119242437]: [1.50897492s] [1.50897492s] END
	I0729 11:35:09.167360       1 trace.go:236] Trace[1733212293]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:353b13c6-a37b-41c2-91b6-e602cedf4ed6,client:192.168.50.53,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 11:35:08.521) (total time: 645ms):
	Trace[1733212293]: ["Create etcd3" audit-id:353b13c6-a37b-41c2-91b6-e602cedf4ed6,key:/events/default/pause-581851.17e6abe539855582,type:*core.Event,resource:events 644ms (11:35:08.522)
	Trace[1733212293]:  ---"TransformToStorage succeeded" 342ms (11:35:08.864)
	Trace[1733212293]:  ---"Txn call succeeded" 302ms (11:35:09.166)]
	Trace[1733212293]: [645.580068ms] [645.580068ms] END
	I0729 11:35:09.278375       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:35:09.320068       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:35:09.420158       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:35:09.490514       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:35:09.503298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:35:20.311288       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:35:20.324398       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5] <==
	W0729 11:34:58.235038       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.352055       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.399489       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.437696       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.453513       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.489848       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.529186       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.570169       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.611006       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.617058       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.629459       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.734667       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.736094       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.769661       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.854949       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.894673       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.918103       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:58.962733       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.023851       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.060842       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.092807       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.098623       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.165861       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.231841       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:34:59.244097       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [37ca187fb69109b1cdc68ed618e48b7e159e6a080ee12a804a912ac013a94583] <==
	I0729 11:35:20.316083       1 shared_informer.go:320] Caches are synced for HPA
	I0729 11:35:20.317259       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 11:35:20.327806       1 shared_informer.go:320] Caches are synced for job
	I0729 11:35:20.329612       1 shared_informer.go:320] Caches are synced for taint
	I0729 11:35:20.329895       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 11:35:20.330034       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-581851"
	I0729 11:35:20.330104       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 11:35:20.331725       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 11:35:20.331787       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 11:35:20.334137       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:35:20.340635       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 11:35:20.340846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="148.928µs"
	I0729 11:35:20.344977       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 11:35:20.345426       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:35:20.349730       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 11:35:20.364233       1 shared_informer.go:320] Caches are synced for deployment
	I0729 11:35:20.371802       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 11:35:20.375620       1 shared_informer.go:320] Caches are synced for disruption
	I0729 11:35:20.377962       1 shared_informer.go:320] Caches are synced for GC
	I0729 11:35:20.382618       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 11:35:20.389277       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 11:35:20.394014       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 11:35:20.774181       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:35:20.778620       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:35:20.778662       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc] <==
	I0729 11:34:43.716710       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0729 11:34:43.716946       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0729 11:34:43.717000       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0729 11:34:43.717146       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0729 11:34:43.718954       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0729 11:34:43.719428       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0729 11:34:43.719876       1 shared_informer.go:313] Waiting for caches to sync for job
	I0729 11:34:43.721914       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0729 11:34:43.722020       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0729 11:34:43.722125       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.722612       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0729 11:34:43.722781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0729 11:34:43.722811       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0729 11:34:43.722838       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0729 11:34:43.722844       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0729 11:34:43.722872       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0729 11:34:43.722877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0729 11:34:43.722888       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.722956       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.723019       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 11:34:43.728643       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0729 11:34:43.728676       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0729 11:34:43.728702       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0729 11:34:43.729083       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0729 11:34:43.758863       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [ce2191b91903c69dedc50e6b5526b98bc889f9f77dab55ed00a597f36b12d2d3] <==
	I0729 11:35:07.664174       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:35:08.457929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 11:35:08.510931       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:35:08.510976       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:35:08.510992       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:35:08.515340       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:35:08.515678       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:35:08.515757       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:35:08.517035       1 config.go:192] "Starting service config controller"
	I0729 11:35:08.517124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:35:08.517181       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:35:08.517199       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:35:08.517764       1 config.go:319] "Starting node config controller"
	I0729 11:35:08.518964       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:35:08.617237       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:35:08.617356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:35:08.619321       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606] <==
	I0729 11:34:39.603750       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:34:41.721256       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 11:34:41.824649       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:34:41.824732       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:34:41.824754       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:34:41.865067       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:34:41.870421       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:34:41.883666       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:41.908633       1 config.go:192] "Starting service config controller"
	I0729 11:34:41.908685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:34:41.908720       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:34:41.908726       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:34:41.909348       1 config.go:319] "Starting node config controller"
	I0729 11:34:41.909387       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:34:42.009784       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:34:42.010195       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:34:42.010527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8] <==
	I0729 11:34:39.647281       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:34:41.654307       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:34:41.655186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:34:41.655257       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:34:41.655292       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:34:41.707057       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:34:41.707966       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:34:41.710685       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:34:41.711037       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:34:41.711108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:34:41.711157       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:34:41.811710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:34:48.883266       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 11:34:48.883429       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 11:34:48.883510       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 11:34:48.884085       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e2f6fc6d1c26b6e29b83122105a8d1674c8f96551e9f8a230513541c590a88cb] <==
	I0729 11:35:03.865705       1 serving.go:380] Generated self-signed cert in-memory
	W0729 11:35:05.435825       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:35:05.435917       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:35:05.435927       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:35:05.435934       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:35:05.490699       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 11:35:05.490809       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:35:05.494379       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 11:35:05.494431       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:35:05.494999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 11:35:05.495224       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:35:05.595178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.032853    3246 scope.go:117] "RemoveContainer" containerID="ef298c1e5d0dc70bec09a5a1eb0d788c35ac26918067b27fc8555cbf4d85aac5"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.037099    3246 scope.go:117] "RemoveContainer" containerID="ee61ceda3fe5c49eaa63fd07fc0e9d095e54684ce29fe0e0242e90d35d6cf5cc"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.039147    3246 scope.go:117] "RemoveContainer" containerID="1758162276bc28c436721ba2b05094b2ac65e4ba66ba76bc6932dbc92c7fd5b8"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.155897    3246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-581851?timeout=10s\": dial tcp 192.168.50.53:8443: connect: connection refused" interval="800ms"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: I0729 11:35:02.253391    3246 kubelet_node_status.go:73] "Attempting to register node" node="pause-581851"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.254986    3246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.53:8443: connect: connection refused" node="pause-581851"
	Jul 29 11:35:02 pause-581851 kubelet[3246]: W0729 11:35:02.537885    3246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-581851&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.538109    3246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-581851&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: W0729 11:35:02.544315    3246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:02 pause-581851 kubelet[3246]: E0729 11:35:02.544438    3246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Jul 29 11:35:03 pause-581851 kubelet[3246]: I0729 11:35:03.057344    3246 kubelet_node_status.go:73] "Attempting to register node" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.628343    3246 kubelet_node_status.go:112] "Node was previously registered" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.628808    3246 kubelet_node_status.go:76] "Successfully registered node" node="pause-581851"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.630307    3246 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: I0729 11:35:05.631402    3246 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:35:05 pause-581851 kubelet[3246]: E0729 11:35:05.784602    3246 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-581851\" already exists" pod="kube-system/kube-apiserver-pause-581851"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.524167    3246 apiserver.go:52] "Watching apiserver"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.527200    3246 topology_manager.go:215] "Topology Admit Handler" podUID="ee1d727f-0f74-4d9d-b25c-f2a885c5d965" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fmbbt"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.528779    3246 topology_manager.go:215] "Topology Admit Handler" podUID="a92ad38f-257a-4364-a81c-2b6bfcb3150c" podNamespace="kube-system" podName="kube-proxy-9c8zc"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.543283    3246 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.641050    3246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a92ad38f-257a-4364-a81c-2b6bfcb3150c-lib-modules\") pod \"kube-proxy-9c8zc\" (UID: \"a92ad38f-257a-4364-a81c-2b6bfcb3150c\") " pod="kube-system/kube-proxy-9c8zc"
	Jul 29 11:35:06 pause-581851 kubelet[3246]: I0729 11:35:06.641640    3246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a92ad38f-257a-4364-a81c-2b6bfcb3150c-xtables-lock\") pod \"kube-proxy-9c8zc\" (UID: \"a92ad38f-257a-4364-a81c-2b6bfcb3150c\") " pod="kube-system/kube-proxy-9c8zc"
	Jul 29 11:35:07 pause-581851 kubelet[3246]: I0729 11:35:07.130842    3246 scope.go:117] "RemoveContainer" containerID="269528986b69cee1276dd2b9e69feba6c06edbcb6c475eb4181be3f9fc616ca6"
	Jul 29 11:35:07 pause-581851 kubelet[3246]: I0729 11:35:07.131273    3246 scope.go:117] "RemoveContainer" containerID="fac8d87679859d23270735ea59d7a53f129fcb1996b7f3cf7454b766489c8606"
	Jul 29 11:35:11 pause-581851 kubelet[3246]: I0729 11:35:11.564089    3246 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-581851 -n pause-581851
helpers_test.go:261: (dbg) Run:  kubectl --context pause-581851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (88.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (295.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m55.275814519s)

                                                
                                                
-- stdout --
	* [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:44.467382   62801 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:44.467517   62801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:44.467529   62801 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:44.467535   62801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:44.467788   62801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:37:44.468447   62801 out.go:298] Setting JSON to false
	I0729 11:37:44.469572   62801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4810,"bootTime":1722248254,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:37:44.469635   62801 start.go:139] virtualization: kvm guest
	I0729 11:37:44.471938   62801 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:37:44.473622   62801 notify.go:220] Checking for updates...
	I0729 11:37:44.473625   62801 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:37:44.475351   62801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:37:44.476888   62801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:37:44.478441   62801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:37:44.479729   62801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:37:44.481218   62801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:37:44.483196   62801 config.go:182] Loaded profile config "bridge-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:44.483320   62801 config.go:182] Loaded profile config "enable-default-cni-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:44.483425   62801 config.go:182] Loaded profile config "flannel-184479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:44.483536   62801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:37:44.523035   62801 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:37:44.524362   62801 start.go:297] selected driver: kvm2
	I0729 11:37:44.524377   62801 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:37:44.524393   62801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:37:44.525450   62801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:37:44.525555   62801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:37:44.542605   62801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:37:44.542664   62801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:37:44.542988   62801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:37:44.543019   62801 cni.go:84] Creating CNI manager for ""
	I0729 11:37:44.543026   62801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:37:44.543039   62801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:37:44.543099   62801 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:37:44.543189   62801 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:37:44.545078   62801 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:37:44.546725   62801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:37:44.546770   62801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:37:44.546781   62801 cache.go:56] Caching tarball of preloaded images
	I0729 11:37:44.546917   62801 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:37:44.546931   62801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:37:44.547042   62801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:37:44.547069   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json: {Name:mk199d555658bbb4bbc6505ef9f8fdfe0542314e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:37:44.547234   62801 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:38:06.799819   62801 start.go:364] duration metric: took 22.252547749s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:38:06.799884   62801 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:38:06.800026   62801 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:38:06.802365   62801 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:38:06.802586   62801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.802642   62801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.822399   62801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0729 11:38:06.822819   62801 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.823500   62801 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.823544   62801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.823861   62801 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.824062   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:38:06.824225   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:06.824371   62801 start.go:159] libmachine.API.Create for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:38:06.824408   62801 client.go:168] LocalClient.Create starting
	I0729 11:38:06.824442   62801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 11:38:06.824478   62801 main.go:141] libmachine: Decoding PEM data...
	I0729 11:38:06.824495   62801 main.go:141] libmachine: Parsing certificate...
	I0729 11:38:06.824538   62801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 11:38:06.824555   62801 main.go:141] libmachine: Decoding PEM data...
	I0729 11:38:06.824565   62801 main.go:141] libmachine: Parsing certificate...
	I0729 11:38:06.824580   62801 main.go:141] libmachine: Running pre-create checks...
	I0729 11:38:06.824588   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .PreCreateCheck
	I0729 11:38:06.824994   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:38:06.825390   62801 main.go:141] libmachine: Creating machine...
	I0729 11:38:06.825404   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .Create
	I0729 11:38:06.825524   62801 main.go:141] libmachine: (old-k8s-version-188043) Creating KVM machine...
	I0729 11:38:06.826890   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found existing default KVM network
	I0729 11:38:06.828116   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:06.827934   63062 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:65:1f:02} reservation:<nil>}
	I0729 11:38:06.829304   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:06.829215   63062 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:8b:80} reservation:<nil>}
	I0729 11:38:06.830308   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:06.830210   63062 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5b:d5:72} reservation:<nil>}
	I0729 11:38:06.831596   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:06.831517   63062 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289a90}
	I0729 11:38:06.831646   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | created network xml: 
	I0729 11:38:06.831679   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | <network>
	I0729 11:38:06.831693   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   <name>mk-old-k8s-version-188043</name>
	I0729 11:38:06.831701   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   <dns enable='no'/>
	I0729 11:38:06.831706   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   
	I0729 11:38:06.831714   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 11:38:06.831725   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |     <dhcp>
	I0729 11:38:06.831737   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 11:38:06.831746   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |     </dhcp>
	I0729 11:38:06.831764   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   </ip>
	I0729 11:38:06.831775   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG |   
	I0729 11:38:06.831785   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | </network>
	I0729 11:38:06.831797   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | 
	I0729 11:38:06.837379   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | trying to create private KVM network mk-old-k8s-version-188043 192.168.72.0/24...
	I0729 11:38:06.914177   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043 ...
	I0729 11:38:06.914210   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | private KVM network mk-old-k8s-version-188043 192.168.72.0/24 created
	I0729 11:38:06.914224   62801 main.go:141] libmachine: (old-k8s-version-188043) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:38:06.914267   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:06.914093   63062 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:38:06.914322   62801 main.go:141] libmachine: (old-k8s-version-188043) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:38:07.157363   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:07.157250   63062 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa...
	I0729 11:38:07.250307   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:07.250179   63062 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/old-k8s-version-188043.rawdisk...
	I0729 11:38:07.250335   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Writing magic tar header
	I0729 11:38:07.250353   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Writing SSH key tar header
	I0729 11:38:07.250439   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:07.250359   63062 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043 ...
	I0729 11:38:07.250474   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043
	I0729 11:38:07.250536   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 11:38:07.250563   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043 (perms=drwx------)
	I0729 11:38:07.250572   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:38:07.250582   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 11:38:07.250591   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:38:07.250598   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:38:07.250605   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Checking permissions on dir: /home
	I0729 11:38:07.250613   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Skipping /home - not owner
	I0729 11:38:07.250627   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:38:07.250642   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 11:38:07.250655   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 11:38:07.250682   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:38:07.250738   62801 main.go:141] libmachine: (old-k8s-version-188043) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
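Before defining the domain, the driver walks the parent directories of the machine store and sets the owner-execute bit so the path stays traversable, skipping directories it does not own (such as /home above). A minimal sketch of that walk-and-chmod pattern, with an invented function name and simplified error handling:

```go
// perms.go - illustrative sketch: make every parent of the machine dir traversable (+x for the owner).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// fixPermissions walks from dir up to the filesystem root and adds the
// owner-execute bit where possible, skipping directories it cannot modify.
func fixPermissions(dir string) error {
	for d := dir; d != string(filepath.Separator); d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		fmt.Println("Checking permissions on dir:", d)
		mode := info.Mode()
		if mode&0o100 == 0 {
			if err := os.Chmod(d, mode|0o100); err != nil {
				fmt.Println("Skipping", d, "-", err)
				continue
			}
			fmt.Printf("Set executable bit on %s (perms=%v)\n", d, mode|0o100)
		}
	}
	return nil
}

func main() {
	if err := fixPermissions("/tmp"); err != nil {
		fmt.Println(err)
	}
}
```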
	I0729 11:38:07.250756   62801 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:38:07.251878   62801 main.go:141] libmachine: (old-k8s-version-188043) define libvirt domain using xml: 
	I0729 11:38:07.251905   62801 main.go:141] libmachine: (old-k8s-version-188043) <domain type='kvm'>
	I0729 11:38:07.251917   62801 main.go:141] libmachine: (old-k8s-version-188043)   <name>old-k8s-version-188043</name>
	I0729 11:38:07.251925   62801 main.go:141] libmachine: (old-k8s-version-188043)   <memory unit='MiB'>2200</memory>
	I0729 11:38:07.251936   62801 main.go:141] libmachine: (old-k8s-version-188043)   <vcpu>2</vcpu>
	I0729 11:38:07.251946   62801 main.go:141] libmachine: (old-k8s-version-188043)   <features>
	I0729 11:38:07.251978   62801 main.go:141] libmachine: (old-k8s-version-188043)     <acpi/>
	I0729 11:38:07.252008   62801 main.go:141] libmachine: (old-k8s-version-188043)     <apic/>
	I0729 11:38:07.252022   62801 main.go:141] libmachine: (old-k8s-version-188043)     <pae/>
	I0729 11:38:07.252034   62801 main.go:141] libmachine: (old-k8s-version-188043)     
	I0729 11:38:07.252046   62801 main.go:141] libmachine: (old-k8s-version-188043)   </features>
	I0729 11:38:07.252056   62801 main.go:141] libmachine: (old-k8s-version-188043)   <cpu mode='host-passthrough'>
	I0729 11:38:07.252068   62801 main.go:141] libmachine: (old-k8s-version-188043)   
	I0729 11:38:07.252078   62801 main.go:141] libmachine: (old-k8s-version-188043)   </cpu>
	I0729 11:38:07.252086   62801 main.go:141] libmachine: (old-k8s-version-188043)   <os>
	I0729 11:38:07.252096   62801 main.go:141] libmachine: (old-k8s-version-188043)     <type>hvm</type>
	I0729 11:38:07.252125   62801 main.go:141] libmachine: (old-k8s-version-188043)     <boot dev='cdrom'/>
	I0729 11:38:07.252147   62801 main.go:141] libmachine: (old-k8s-version-188043)     <boot dev='hd'/>
	I0729 11:38:07.252163   62801 main.go:141] libmachine: (old-k8s-version-188043)     <bootmenu enable='no'/>
	I0729 11:38:07.252174   62801 main.go:141] libmachine: (old-k8s-version-188043)   </os>
	I0729 11:38:07.252193   62801 main.go:141] libmachine: (old-k8s-version-188043)   <devices>
	I0729 11:38:07.252205   62801 main.go:141] libmachine: (old-k8s-version-188043)     <disk type='file' device='cdrom'>
	I0729 11:38:07.252222   62801 main.go:141] libmachine: (old-k8s-version-188043)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/boot2docker.iso'/>
	I0729 11:38:07.252237   62801 main.go:141] libmachine: (old-k8s-version-188043)       <target dev='hdc' bus='scsi'/>
	I0729 11:38:07.252249   62801 main.go:141] libmachine: (old-k8s-version-188043)       <readonly/>
	I0729 11:38:07.252259   62801 main.go:141] libmachine: (old-k8s-version-188043)     </disk>
	I0729 11:38:07.252272   62801 main.go:141] libmachine: (old-k8s-version-188043)     <disk type='file' device='disk'>
	I0729 11:38:07.252284   62801 main.go:141] libmachine: (old-k8s-version-188043)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:38:07.252300   62801 main.go:141] libmachine: (old-k8s-version-188043)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/old-k8s-version-188043.rawdisk'/>
	I0729 11:38:07.252315   62801 main.go:141] libmachine: (old-k8s-version-188043)       <target dev='hda' bus='virtio'/>
	I0729 11:38:07.252326   62801 main.go:141] libmachine: (old-k8s-version-188043)     </disk>
	I0729 11:38:07.252337   62801 main.go:141] libmachine: (old-k8s-version-188043)     <interface type='network'>
	I0729 11:38:07.252351   62801 main.go:141] libmachine: (old-k8s-version-188043)       <source network='mk-old-k8s-version-188043'/>
	I0729 11:38:07.252361   62801 main.go:141] libmachine: (old-k8s-version-188043)       <model type='virtio'/>
	I0729 11:38:07.252374   62801 main.go:141] libmachine: (old-k8s-version-188043)     </interface>
	I0729 11:38:07.252388   62801 main.go:141] libmachine: (old-k8s-version-188043)     <interface type='network'>
	I0729 11:38:07.252401   62801 main.go:141] libmachine: (old-k8s-version-188043)       <source network='default'/>
	I0729 11:38:07.252411   62801 main.go:141] libmachine: (old-k8s-version-188043)       <model type='virtio'/>
	I0729 11:38:07.252423   62801 main.go:141] libmachine: (old-k8s-version-188043)     </interface>
	I0729 11:38:07.252432   62801 main.go:141] libmachine: (old-k8s-version-188043)     <serial type='pty'>
	I0729 11:38:07.252443   62801 main.go:141] libmachine: (old-k8s-version-188043)       <target port='0'/>
	I0729 11:38:07.252452   62801 main.go:141] libmachine: (old-k8s-version-188043)     </serial>
	I0729 11:38:07.252463   62801 main.go:141] libmachine: (old-k8s-version-188043)     <console type='pty'>
	I0729 11:38:07.252474   62801 main.go:141] libmachine: (old-k8s-version-188043)       <target type='serial' port='0'/>
	I0729 11:38:07.252484   62801 main.go:141] libmachine: (old-k8s-version-188043)     </console>
	I0729 11:38:07.252507   62801 main.go:141] libmachine: (old-k8s-version-188043)     <rng model='virtio'>
	I0729 11:38:07.252536   62801 main.go:141] libmachine: (old-k8s-version-188043)       <backend model='random'>/dev/random</backend>
	I0729 11:38:07.252554   62801 main.go:141] libmachine: (old-k8s-version-188043)     </rng>
	I0729 11:38:07.252566   62801 main.go:141] libmachine: (old-k8s-version-188043)     
	I0729 11:38:07.252576   62801 main.go:141] libmachine: (old-k8s-version-188043)     
	I0729 11:38:07.252586   62801 main.go:141] libmachine: (old-k8s-version-188043)   </devices>
	I0729 11:38:07.252590   62801 main.go:141] libmachine: (old-k8s-version-188043) </domain>
	I0729 11:38:07.252619   62801 main.go:141] libmachine: (old-k8s-version-188043) 
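The domain definition above is emitted as a single XML document before the domain is created. Purely as an illustration of how such a definition can be produced, here is a minimal Go sketch that renders a similar (abridged) libvirt domain XML with text/template; the template and struct are invented for this example and are not the kvm2 driver's actual template.

```go
// domainxml.go - illustrative sketch: render a minimal libvirt domain definition with text/template.
package main

import (
	"os"
	"text/template"
)

// domain holds the values substituted into the template; field names are illustrative only.
type domain struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISO      string
	Disk     string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	d := domain{
		Name:     "old-k8s-version-188043",
		MemoryMB: 2200,
		VCPU:     2,
		ISO:      "/path/to/boot2docker.iso",
		Disk:     "/path/to/old-k8s-version-188043.rawdisk",
		Network:  "mk-old-k8s-version-188043",
	}
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}
```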
	I0729 11:38:07.257031   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:03:0e:bc in network default
	I0729 11:38:07.257725   62801 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:38:07.257755   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:07.258540   62801 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:38:07.258936   62801 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:38:07.259642   62801 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:38:07.260681   62801 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:38:08.695775   62801 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:38:08.696485   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:08.696999   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:08.697025   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:08.696970   63062 retry.go:31] will retry after 300.784515ms: waiting for machine to come up
	I0729 11:38:08.999840   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:09.000378   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:09.000410   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:09.000335   63062 retry.go:31] will retry after 287.835559ms: waiting for machine to come up
	I0729 11:38:09.289767   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:09.290680   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:09.290728   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:09.290655   63062 retry.go:31] will retry after 482.787423ms: waiting for machine to come up
	I0729 11:38:09.775249   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:09.775838   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:09.775881   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:09.775779   63062 retry.go:31] will retry after 528.438047ms: waiting for machine to come up
	I0729 11:38:10.305686   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:10.306362   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:10.306399   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:10.306278   63062 retry.go:31] will retry after 526.52016ms: waiting for machine to come up
	I0729 11:38:10.834641   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:10.835139   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:10.835187   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:10.835098   63062 retry.go:31] will retry after 704.754339ms: waiting for machine to come up
	I0729 11:38:11.541194   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:11.541778   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:11.541800   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:11.541720   63062 retry.go:31] will retry after 767.808693ms: waiting for machine to come up
	I0729 11:38:12.310906   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:12.311467   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:12.311497   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:12.311422   63062 retry.go:31] will retry after 1.238280592s: waiting for machine to come up
	I0729 11:38:13.551994   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:13.552549   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:13.552571   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:13.552512   63062 retry.go:31] will retry after 1.282272251s: waiting for machine to come up
	I0729 11:38:14.835953   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:14.836543   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:14.836572   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:14.836497   63062 retry.go:31] will retry after 1.806982876s: waiting for machine to come up
	I0729 11:38:16.644737   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:16.645173   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:16.645197   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:16.645145   63062 retry.go:31] will retry after 2.168367856s: waiting for machine to come up
	I0729 11:38:18.816051   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:18.816727   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:18.816758   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:18.816667   63062 retry.go:31] will retry after 2.379927043s: waiting for machine to come up
	I0729 11:38:21.199255   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:21.199834   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:21.199871   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:21.199781   63062 retry.go:31] will retry after 4.452327139s: waiting for machine to come up
	I0729 11:38:25.656188   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:25.656716   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:38:25.656750   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:38:25.656706   63062 retry.go:31] will retry after 4.963978904s: waiting for machine to come up
	I0729 11:38:30.623711   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.624261   62801 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:38:30.624285   62801 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:38:30.624298   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.624679   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043
	I0729 11:38:30.708435   62801 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
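The "will retry after ..." lines show the driver polling for a DHCP lease with a growing, jittered delay until the lease appears. A minimal sketch of that retry-with-backoff pattern, with invented helper names and timing constants (this is not minikube's retry package):

```go
// retry.go - illustrative sketch: poll with growing, jittered backoff until a condition succeeds.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter keeps calling fn until it succeeds or the deadline passes,
// sleeping a jittered, growing interval between attempts (similar in spirit
// to the "will retry after ..." lines in the log).
func retryAfter(deadline time.Duration, fn func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow roughly 1.5x per attempt
	}
}

func main() {
	attempts := 0
	err := retryAfter(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err, "after", attempts, "attempts")
}
```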
	I0729 11:38:30.708467   62801 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:38:30.708478   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:38:30.711385   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.711922   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:30.711949   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.712132   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:38:30.712159   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:38:30.712189   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:38:30.712203   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:38:30.712252   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:38:30.839110   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:38:30.839396   62801 main.go:141] libmachine: (old-k8s-version-188043) KVM machine creation complete!
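WaitForSSH above shells out to the system ssh client with non-interactive options and runs `exit 0`; a clean exit means sshd is reachable and the generated key is accepted. A small Go sketch of the same probe using os/exec, reusing the options quoted in the log (the helper name is an assumption):

```go
// waitssh.go - illustrative sketch: probe a new VM by running "exit 0" over ssh, as in the log.
package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs `exit 0` on the target host with non-interactive options
// similar to the ones logged by the driver. A nil error means sshd is up and
// the private key is accepted.
func sshExitZero(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	// Host and key path taken from the log above; adjust for your environment.
	key := "/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa"
	if err := sshExitZero("192.168.72.61", key); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}
```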
	I0729 11:38:30.839835   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:38:30.840473   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:30.840672   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:30.840827   62801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:38:30.840839   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:38:30.842126   62801 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:38:30.842148   62801 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:38:30.842156   62801 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:38:30.842168   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:30.844596   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.845021   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:30.845051   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.845241   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:30.845403   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:30.845551   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:30.845713   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:30.845883   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:30.846075   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:30.846085   62801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:38:30.954093   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:38:30.954119   62801 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:38:30.954129   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:30.957031   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.957416   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:30.957439   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:30.957651   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:30.957854   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:30.958032   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:30.958197   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:30.958360   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:30.958555   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:30.958567   62801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:38:31.063534   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:38:31.063605   62801 main.go:141] libmachine: found compatible host: buildroot
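Provisioner detection is simply `cat /etc/os-release` followed by matching the ID field ("buildroot" here). A minimal sketch of parsing that KEY=value output, using the exact text captured above:

```go
// osrelease.go - illustrative sketch: parse /etc/os-release style KEY=value output.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines from /etc/os-release into a map,
// stripping surrounding quotes from values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(value, `"`)
	}
	return out
}

func main() {
	// The output captured in the log above.
	sample := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	info := parseOSRelease(sample)
	fmt.Println("ID:", info["ID"], "PRETTY_NAME:", info["PRETTY_NAME"])
}
```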
	I0729 11:38:31.063618   62801 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:38:31.063632   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:38:31.063960   62801 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:38:31.063991   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:38:31.064178   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.066857   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.067241   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.067269   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.067466   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:31.067646   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.067784   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.067996   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:31.068172   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:31.068345   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:31.068356   62801 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:38:31.186358   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:38:31.186390   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.189318   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.189596   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.189636   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.189773   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:31.189970   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.190115   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.190318   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:31.190524   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:31.190777   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:31.190797   62801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:38:31.308524   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:38:31.308557   62801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:38:31.308613   62801 buildroot.go:174] setting up certificates
	I0729 11:38:31.308629   62801 provision.go:84] configureAuth start
	I0729 11:38:31.308647   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:38:31.308933   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:38:31.311916   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.312275   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.312295   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.312435   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.314922   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.315341   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.315370   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.315464   62801 provision.go:143] copyHostCerts
	I0729 11:38:31.315530   62801 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:38:31.315542   62801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:38:31.315606   62801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:38:31.315829   62801 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:38:31.315841   62801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:38:31.315869   62801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:38:31.315960   62801 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:38:31.315969   62801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:38:31.315996   62801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:38:31.316074   62801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
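configureAuth generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the profile name, signed against the local ca.pem/ca-key.pem. The sketch below shows only the SAN handling, simplified to a self-signed certificate rather than a CA-signed one, so it is an illustration of the idea rather than the actual provisioning code:

```go
// servercert.go - illustrative sketch: create a certificate with the IP and DNS SANs from the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-188043"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.61")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-188043"},
	}
	// Self-signed for brevity: the template is also used as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```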
	I0729 11:38:31.458968   62801 provision.go:177] copyRemoteCerts
	I0729 11:38:31.459032   62801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:38:31.459059   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.461856   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.462237   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.462258   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.462489   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:31.462670   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.462869   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:31.463061   62801 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:38:31.549741   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:38:31.577710   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:38:31.604467   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:38:31.629441   62801 provision.go:87] duration metric: took 320.79709ms to configureAuth
	I0729 11:38:31.629474   62801 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:38:31.629679   62801 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:38:31.629785   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.632961   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.633383   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.633415   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.633590   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:31.633793   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.633994   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.634197   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:31.634360   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:31.634586   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:31.634612   62801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:38:31.912134   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:38:31.912176   62801 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:38:31.912188   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetURL
	I0729 11:38:31.913789   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using libvirt version 6000000
	I0729 11:38:31.915968   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.916357   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.916400   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.916571   62801 main.go:141] libmachine: Docker is up and running!
	I0729 11:38:31.916590   62801 main.go:141] libmachine: Reticulating splines...
	I0729 11:38:31.916598   62801 client.go:171] duration metric: took 25.092179513s to LocalClient.Create
	I0729 11:38:31.916625   62801 start.go:167] duration metric: took 25.092254609s to libmachine.API.Create "old-k8s-version-188043"
	I0729 11:38:31.916636   62801 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:38:31.916650   62801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:38:31.916673   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:31.917033   62801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:38:31.917057   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:31.919178   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.919502   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:31.919520   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:31.919636   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:31.919873   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:31.920064   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:31.920280   62801 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:38:32.001884   62801 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:38:32.006943   62801 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:38:32.006971   62801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:38:32.007030   62801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:38:32.007129   62801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:38:32.007250   62801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:38:32.017978   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:38:32.043733   62801 start.go:296] duration metric: took 127.084726ms for postStartSetup
	I0729 11:38:32.043801   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:38:32.044419   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:38:32.047126   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.047569   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:32.047592   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.047835   62801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:38:32.048103   62801 start.go:128] duration metric: took 25.248061019s to createHost
	I0729 11:38:32.048133   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:32.050460   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.050807   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:32.050837   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.051005   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:32.051189   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:32.051357   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:32.051539   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:32.051719   62801 main.go:141] libmachine: Using SSH client type: native
	I0729 11:38:32.051905   62801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:38:32.051920   62801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:38:32.155793   62801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253112.126415584
	
	I0729 11:38:32.155823   62801 fix.go:216] guest clock: 1722253112.126415584
	I0729 11:38:32.155832   62801 fix.go:229] Guest: 2024-07-29 11:38:32.126415584 +0000 UTC Remote: 2024-07-29 11:38:32.048119938 +0000 UTC m=+47.616274364 (delta=78.295646ms)
	I0729 11:38:32.155868   62801 fix.go:200] guest clock delta is within tolerance: 78.295646ms
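The guest clock check runs `date +%s.%N` over SSH and compares the result with the host's wall clock; here the delta of about 78 ms is within tolerance. A small sketch of that comparison using the values from the log (the one-second tolerance is an assumption for illustration):

```go
// clockdelta.go - illustrative sketch: compare a guest "date +%s.%N" reading against a local timestamp.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nanos, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	// Guest and host readings taken from the log above.
	guest, err := parseEpoch("1722253112.126415584")
	if err != nil {
		panic(err)
	}
	local := time.Date(2024, 7, 29, 11, 38, 32, 48119938, time.UTC)
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within assumed 1s tolerance: %v)\n", delta, delta < time.Second)
}
```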
	I0729 11:38:32.155872   62801 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 25.356022984s
	I0729 11:38:32.155906   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:32.156233   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:38:32.158961   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.159384   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:32.159412   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.159596   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:32.160258   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:32.160436   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:38:32.160538   62801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:38:32.160574   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:32.160670   62801 ssh_runner.go:195] Run: cat /version.json
	I0729 11:38:32.160695   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:38:32.163658   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.163755   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.164008   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:32.164036   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.164193   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:32.164223   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:32.164347   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:32.164559   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:32.164583   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:38:32.164768   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:38:32.164784   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:32.164965   62801 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:38:32.164989   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:38:32.165130   62801 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:38:32.269751   62801 ssh_runner.go:195] Run: systemctl --version
	I0729 11:38:32.278551   62801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:38:32.441148   62801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:38:32.447850   62801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:38:32.447930   62801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:38:32.467385   62801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:38:32.467414   62801 start.go:495] detecting cgroup driver to use...
	I0729 11:38:32.467493   62801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:38:32.499891   62801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:38:32.517433   62801 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:38:32.517493   62801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:38:32.534825   62801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:38:32.550772   62801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:38:32.695171   62801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:38:32.866406   62801 docker.go:233] disabling docker service ...
	I0729 11:38:32.866470   62801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:38:32.882446   62801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:38:32.896705   62801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:38:33.042067   62801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:38:33.173892   62801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:38:33.190130   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:38:33.211672   62801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:38:33.211745   62801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:38:33.225297   62801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:38:33.225376   62801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:38:33.237265   62801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:38:33.252282   62801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:38:33.265603   62801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
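The cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed: the pause image is pinned to registry.k8s.io/pause:3.2, cgroup_manager is switched to cgroupfs, and conmon_cgroup = "pod" is re-added after it. An equivalent, purely illustrative rewrite of those substitutions in Go regexp form (the starting drop-in contents are invented for the example):

```go
// crioconf.go - illustrative sketch: apply the same line rewrites as the sed commands in the log.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of 02-crio.conf, for demonstration only.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```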
	I0729 11:38:33.281760   62801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:38:33.293872   62801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:38:33.293943   62801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:38:33.313005   62801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:38:33.326904   62801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:38:33.464345   62801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:38:33.621426   62801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:38:33.621490   62801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:38:33.626889   62801 start.go:563] Will wait 60s for crictl version
	I0729 11:38:33.626953   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:33.631456   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:38:33.682100   62801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:38:33.682204   62801 ssh_runner.go:195] Run: crio --version
	I0729 11:38:33.712961   62801 ssh_runner.go:195] Run: crio --version
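The block above points CRI-O at the registry.k8s.io/pause:3.2 pause image, switches cgroup_manager to cgroupfs with conmon_cgroup = "pod", restarts the service and confirms the runtime version. A minimal way to double-check the settings that actually took effect (a sketch, assuming shell access to the node; the config path and expected values are taken from the log lines above):

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	$ sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'   # effective values after the restart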
	I0729 11:38:33.749862   62801 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:38:33.751183   62801 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:38:33.754245   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:33.754828   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:38:23 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:38:33.754861   62801 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:38:33.755116   62801 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:38:33.760770   62801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:38:33.779558   62801 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:38:33.779663   62801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:38:33.779714   62801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:38:33.834772   62801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:38:33.834863   62801 ssh_runner.go:195] Run: which lz4
	I0729 11:38:33.840621   62801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:38:33.845477   62801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:38:33.845514   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:38:35.622975   62801 crio.go:462] duration metric: took 1.782385507s to copy over tarball
	I0729 11:38:35.623066   62801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:38:38.612974   62801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989870393s)
	I0729 11:38:38.613007   62801 crio.go:469] duration metric: took 2.989997798s to extract the tarball
	I0729 11:38:38.613022   62801 ssh_runner.go:146] rm: /preloaded.tar.lz4
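The preload step above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to the node as /preloaded.tar.lz4 and unpacks it into /var, underneath which CRI-O's image store lives. Reproduced by hand (a sketch reusing the exact command from the log) it amounts to:

	$ sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	$ sudo crictl images   # check whether the unpacked images are now visible to the runtime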
	I0729 11:38:38.662751   62801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:38:38.713512   62801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:38:38.713539   62801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:38:38.713621   62801 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:38:38.713665   62801 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:38:38.713708   62801 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:38:38.713907   62801 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:38:38.713927   62801 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:38:38.714022   62801 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:38:38.713907   62801 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:38:38.714152   62801 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:38:38.715088   62801 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:38:38.715257   62801 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:38:38.715451   62801 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:38:38.715587   62801 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:38:38.715854   62801 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:38:38.716019   62801 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:38:38.716118   62801 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:38:38.716190   62801 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:38:38.863001   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:38:38.864993   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:38:38.867121   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:38:38.872402   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:38:38.895856   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:38:38.904143   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:38:38.948374   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:38:38.985422   62801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:38:38.985487   62801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:38:38.985546   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.027836   62801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:38:39.027892   62801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:38:39.027953   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.044195   62801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:38:39.044246   62801 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:38:39.044272   62801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:38:39.044343   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.044279   62801 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:38:39.044418   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.070580   62801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:38:39.070635   62801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:38:39.070690   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.088806   62801 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:38:39.088868   62801 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:38:39.088913   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.094359   62801 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:38:39.094413   62801 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:38:39.094451   62801 ssh_runner.go:195] Run: which crictl
	I0729 11:38:39.094467   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:38:39.094479   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:38:39.094479   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:38:39.094525   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:38:39.094538   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:38:39.094588   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:38:39.105598   62801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:38:39.283728   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:38:39.283778   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:38:39.283793   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:38:39.283801   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:38:39.283875   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:38:39.283910   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:38:39.284477   62801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:38:39.669015   62801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:38:39.811433   62801 cache_images.go:92] duration metric: took 1.09786514s to LoadCachedImages
	W0729 11:38:39.811603   62801 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
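Neither the preload nor the on-disk image cache was usable here, so kubeadm's preflight has to pull the v1.20.0 control-plane images itself (its own output further down points at 'kubeadm config images pull'). A sketch of doing that pre-pull manually with the bundled kubeadm binary; the flags are standard kubeadm CLI options, not commands taken from this log:

	$ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --kubernetes-version v1.20.0
	$ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images pull --kubernetes-version v1.20.0 --cri-socket /var/run/crio/crio.sock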
	I0729 11:38:39.811728   62801 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:38:39.811854   62801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:38:39.811942   62801 ssh_runner.go:195] Run: crio config
	I0729 11:38:39.877158   62801 cni.go:84] Creating CNI manager for ""
	I0729 11:38:39.877177   62801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:38:39.877185   62801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:38:39.877201   62801 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:38:39.877308   62801 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
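The generated file above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place. One way to sanity-check such a config before the real init (a sketch using kubeadm's --dry-run mode, not a command from this log) would be:

	$ sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run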
	
	I0729 11:38:39.877367   62801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:38:39.887739   62801 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:38:39.887815   62801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:38:39.897332   62801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:38:39.916806   62801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:38:39.935282   62801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:38:39.954106   62801 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:38:39.958460   62801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:38:39.972124   62801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:38:40.112271   62801 ssh_runner.go:195] Run: sudo systemctl start kubelet
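At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kubeadm config have been written, systemd has been reloaded and the kubelet started. The effective unit, including the ExecStart line quoted earlier, can be inspected with standard systemd tooling (a sketch, assuming shell access to the node):

	$ systemctl cat kubelet            # unit file plus the 10-kubeadm.conf drop-in
	$ sudo systemctl is-active kubelet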
	I0729 11:38:40.131176   62801 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:38:40.131199   62801 certs.go:194] generating shared ca certs ...
	I0729 11:38:40.131218   62801 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.131397   62801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:38:40.131473   62801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:38:40.131488   62801 certs.go:256] generating profile certs ...
	I0729 11:38:40.131554   62801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:38:40.131572   62801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.crt with IP's: []
	I0729 11:38:40.301365   62801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.crt ...
	I0729 11:38:40.301397   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.crt: {Name:mk97f4b73d42e9611a67a87d9328e8b179138777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.301617   62801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key ...
	I0729 11:38:40.301638   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key: {Name:mk4fe023242dc9807d48d20040f2be3e04500a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.301750   62801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:38:40.301770   62801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt.2bbdfef4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.61]
	I0729 11:38:40.505027   62801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt.2bbdfef4 ...
	I0729 11:38:40.505060   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt.2bbdfef4: {Name:mk3e5029e48b4142ca93531d63f37864bf3a73fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.505237   62801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4 ...
	I0729 11:38:40.505259   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4: {Name:mk020a3079911987752b4c51af3a7526c507f691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.505371   62801 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt.2bbdfef4 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt
	I0729 11:38:40.505480   62801 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key
	I0729 11:38:40.505565   62801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:38:40.505591   62801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt with IP's: []
	I0729 11:38:40.953444   62801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt ...
	I0729 11:38:40.953478   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt: {Name:mk0f5e5d82bc59dee6182447414aed0a12ec8401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.953673   62801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key ...
	I0729 11:38:40.953691   62801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key: {Name:mk900d3d556892c3571dd18ce874d3fb1b09e543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:38:40.953906   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:38:40.953956   62801 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:38:40.953970   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:38:40.954005   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:38:40.954036   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:38:40.954068   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:38:40.954122   62801 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:38:40.954755   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:38:40.990066   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:38:41.035429   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:38:41.073392   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:38:41.111643   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:38:41.141364   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:38:41.167999   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:38:41.199324   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:38:41.228344   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:38:41.261012   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:38:41.290256   62801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:38:41.319526   62801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
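All CA and profile material is now on the node under /var/lib/minikube/certs and /usr/share/ca-certificates. The apiserver certificate generated above was signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.61; a quick check of the SANs on the copied file (a sketch) is:

	$ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'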
	I0729 11:38:41.343711   62801 ssh_runner.go:195] Run: openssl version
	I0729 11:38:41.352398   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:38:41.369311   62801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:38:41.375879   62801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:38:41.375952   62801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:38:41.384900   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:38:41.400813   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:38:41.417332   62801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:38:41.423694   62801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:38:41.423774   62801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:38:41.431871   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:38:41.448281   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:38:41.464927   62801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:38:41.471062   62801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:38:41.471140   62801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:38:41.479180   62801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
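The three test -L / ln -fs blocks above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0). The hash that names each symlink is what the openssl invocation already shown in the log produces, e.g. for minikubeCA.pem:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941   # matches the /etc/ssl/certs/b5213941.0 symlink created above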
	I0729 11:38:41.495258   62801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:38:41.500179   62801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:38:41.500244   62801 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:38:41.500338   62801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:38:41.500406   62801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:38:41.547600   62801 cri.go:89] found id: ""
	I0729 11:38:41.547677   62801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:38:41.560224   62801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:38:41.572733   62801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:38:41.584046   62801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:38:41.584083   62801 kubeadm.go:157] found existing configuration files:
	
	I0729 11:38:41.584151   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:38:41.596412   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:38:41.596479   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:38:41.609214   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:38:41.620929   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:38:41.621016   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:38:41.633167   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:38:41.646691   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:38:41.646951   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:38:41.661194   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:38:41.673948   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:38:41.674080   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:38:41.684984   62801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:38:41.870568   62801 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:38:41.870653   62801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:38:42.038435   62801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:38:42.038577   62801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:38:42.038732   62801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:38:42.288804   62801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:38:42.292240   62801 out.go:204]   - Generating certificates and keys ...
	I0729 11:38:42.292676   62801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:38:42.292811   62801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:38:42.364642   62801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:38:42.757452   62801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:38:42.873114   62801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:38:43.228217   62801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:38:43.378899   62801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:38:43.379779   62801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-188043] and IPs [192.168.72.61 127.0.0.1 ::1]
	I0729 11:38:43.473850   62801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:38:43.474223   62801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-188043] and IPs [192.168.72.61 127.0.0.1 ::1]
	I0729 11:38:43.672677   62801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:38:44.093865   62801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:38:44.623738   62801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:38:44.624037   62801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:38:44.781780   62801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:38:45.047826   62801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:38:45.139020   62801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:38:45.252432   62801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:38:45.296723   62801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:38:45.296853   62801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:38:45.296914   62801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:38:45.501446   62801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:38:45.503452   62801 out.go:204]   - Booting up control plane ...
	I0729 11:38:45.503588   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:38:45.514772   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:38:45.517362   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:38:45.518285   62801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:38:45.532323   62801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:39:25.525221   62801 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:39:25.538447   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:39:25.538686   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:39:30.538716   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:39:30.539007   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:39:40.538576   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:39:40.538845   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:40:00.538477   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:40:00.538683   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:40:40.540039   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:40:40.540260   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:40:40.540272   62801 kubeadm.go:310] 
	I0729 11:40:40.540305   62801 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:40:40.540340   62801 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:40:40.540346   62801 kubeadm.go:310] 
	I0729 11:40:40.540378   62801 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:40:40.540426   62801 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:40:40.540579   62801 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:40:40.540605   62801 kubeadm.go:310] 
	I0729 11:40:40.540745   62801 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:40:40.540786   62801 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:40:40.540832   62801 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:40:40.540844   62801 kubeadm.go:310] 
	I0729 11:40:40.541006   62801 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:40:40.541092   62801 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:40:40.541100   62801 kubeadm.go:310] 
	I0729 11:40:40.541194   62801 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:40:40.541282   62801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:40:40.541360   62801 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:40:40.541431   62801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:40:40.541443   62801 kubeadm.go:310] 
	I0729 11:40:40.542050   62801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:40:40.542198   62801 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:40:40.542273   62801 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:40:40.542391   62801 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-188043] and IPs [192.168.72.61 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-188043] and IPs [192.168.72.61 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
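The failure above is a kubelet health-check timeout: kubeadm waited for http://localhost:10248/healthz but the kubelet never came up, so the static control-plane pods could not be confirmed. The diagnostics kubeadm suggests map directly onto this CRI-O node and are what one would run next:

	$ sudo systemctl status kubelet
	$ sudo journalctl -xeu kubelet
	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the listing above

minikube itself proceeds below with 'kubeadm reset' and a retry.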
	
	I0729 11:40:40.542443   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:40:42.517886   62801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.975411697s)
	I0729 11:40:42.517979   62801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:40:42.535377   62801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:40:42.545535   62801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:40:42.545558   62801 kubeadm.go:157] found existing configuration files:
	
	I0729 11:40:42.545600   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:40:42.555001   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:40:42.555060   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:40:42.564904   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:40:42.574522   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:40:42.574633   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:40:42.586210   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:40:42.596972   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:40:42.597045   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:40:42.607126   62801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:40:42.617261   62801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:40:42.617328   62801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:40:42.627820   62801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:40:42.708741   62801 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:40:42.708826   62801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:40:42.880490   62801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:40:42.880629   62801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:40:42.880751   62801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:40:43.085917   62801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:40:43.088974   62801 out.go:204]   - Generating certificates and keys ...
	I0729 11:40:43.089103   62801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:40:43.089195   62801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:40:43.089301   62801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:40:43.089381   62801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:40:43.089465   62801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:40:43.089528   62801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:40:43.089609   62801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:40:43.089681   62801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:40:43.090212   62801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:40:43.090624   62801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:40:43.090843   62801 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:40:43.090930   62801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:40:43.282427   62801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:40:43.639332   62801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:40:43.811506   62801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:40:43.885780   62801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:40:43.902900   62801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:40:43.903996   62801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:40:43.904044   62801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:40:44.063352   62801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:40:44.065384   62801 out.go:204]   - Booting up control plane ...
	I0729 11:40:44.065516   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:40:44.072602   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:40:44.072711   62801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:40:44.073464   62801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:40:44.075810   62801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:41:24.078382   62801 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:41:24.078765   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:41:24.079032   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:41:29.079540   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:41:29.079765   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:41:39.080241   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:41:39.080502   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:41:59.079481   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:41:59.079731   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:42:39.079416   62801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:42:39.079618   62801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:42:39.079628   62801 kubeadm.go:310] 
	I0729 11:42:39.079668   62801 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:42:39.079713   62801 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:42:39.079723   62801 kubeadm.go:310] 
	I0729 11:42:39.079757   62801 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:42:39.079792   62801 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:42:39.079939   62801 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:42:39.079953   62801 kubeadm.go:310] 
	I0729 11:42:39.080101   62801 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:42:39.080136   62801 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:42:39.080181   62801 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:42:39.080191   62801 kubeadm.go:310] 
	I0729 11:42:39.080366   62801 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:42:39.080482   62801 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:42:39.080498   62801 kubeadm.go:310] 
	I0729 11:42:39.080613   62801 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:42:39.080708   62801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:42:39.080802   62801 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:42:39.080932   62801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:42:39.080966   62801 kubeadm.go:310] 
	I0729 11:42:39.081863   62801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:42:39.081965   62801 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:42:39.082053   62801 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:42:39.082116   62801 kubeadm.go:394] duration metric: took 3m57.581877975s to StartCluster
	I0729 11:42:39.082188   62801 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:42:39.082248   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:42:39.124552   62801 cri.go:89] found id: ""
	I0729 11:42:39.124584   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.124594   62801 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:42:39.124600   62801 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:42:39.124679   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:42:39.160466   62801 cri.go:89] found id: ""
	I0729 11:42:39.160494   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.160502   62801 logs.go:278] No container was found matching "etcd"
	I0729 11:42:39.160516   62801 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:42:39.160589   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:42:39.195258   62801 cri.go:89] found id: ""
	I0729 11:42:39.195289   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.195301   62801 logs.go:278] No container was found matching "coredns"
	I0729 11:42:39.195309   62801 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:42:39.195375   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:42:39.231951   62801 cri.go:89] found id: ""
	I0729 11:42:39.231978   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.231985   62801 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:42:39.231990   62801 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:42:39.232048   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:42:39.266854   62801 cri.go:89] found id: ""
	I0729 11:42:39.266877   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.266885   62801 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:42:39.266890   62801 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:42:39.266950   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:42:39.305583   62801 cri.go:89] found id: ""
	I0729 11:42:39.305610   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.305618   62801 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:42:39.305624   62801 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:42:39.305685   62801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:42:39.340805   62801 cri.go:89] found id: ""
	I0729 11:42:39.340833   62801 logs.go:276] 0 containers: []
	W0729 11:42:39.340842   62801 logs.go:278] No container was found matching "kindnet"
	I0729 11:42:39.340851   62801 logs.go:123] Gathering logs for container status ...
	I0729 11:42:39.340862   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:42:39.395029   62801 logs.go:123] Gathering logs for kubelet ...
	I0729 11:42:39.395060   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:42:39.451495   62801 logs.go:123] Gathering logs for dmesg ...
	I0729 11:42:39.451533   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:42:39.471788   62801 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:42:39.471817   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:42:39.593021   62801 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:42:39.593050   62801 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:42:39.593063   62801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 11:42:39.692335   62801 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:42:39.692385   62801 out.go:239] * 
	* 
	W0729 11:42:39.692453   62801 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:42:39.692482   62801 out.go:239] * 
	* 
	W0729 11:42:39.693322   62801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:42:39.696297   62801 out.go:177] 
	W0729 11:42:39.697413   62801 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:42:39.697459   62801 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:42:39.697487   62801 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:42:39.699065   62801 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 6 (222.196014ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:39.968757   69510 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-188043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.56s)
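Note: this failure is the K8S_KUBELET_NOT_RUNNING path, and minikube's own stderr above (the "Suggestion:" line) points at the kubelet cgroup driver. A hypothetical manual retry based on that hint, not something exercised by this test run, would re-issue the same first-start command with the suggested extra config and then read the kubelet journal from inside the VM; the --extra-config value is taken verbatim from the log's suggestion, and its effect on this v1.20.0 profile is an assumption, not something verified here:

	out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-linux-amd64 ssh -p old-k8s-version-188043 -- sudo journalctl -xeu kubelet | tail -n 100
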

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-297799 --alsologtostderr -v=3
E0729 11:39:57.915925   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:40:36.673442   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.678768   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.689032   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.709346   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.749644   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.830248   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:36.990396   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-297799 --alsologtostderr -v=3: exit status 82 (2m0.526069569s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-297799"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:39:52.891241   68414 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:39:52.891516   68414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:39:52.891529   68414 out.go:304] Setting ErrFile to fd 2...
	I0729 11:39:52.891536   68414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:39:52.891741   68414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:39:52.892064   68414 out.go:298] Setting JSON to false
	I0729 11:39:52.892179   68414 mustload.go:65] Loading cluster: no-preload-297799
	I0729 11:39:52.892514   68414 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:39:52.892596   68414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:39:52.892758   68414 mustload.go:65] Loading cluster: no-preload-297799
	I0729 11:39:52.892866   68414 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:39:52.892892   68414 stop.go:39] StopHost: no-preload-297799
	I0729 11:39:52.893275   68414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:39:52.893324   68414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:39:52.908467   68414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I0729 11:39:52.909064   68414 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:39:52.909678   68414 main.go:141] libmachine: Using API Version  1
	I0729 11:39:52.909694   68414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:39:52.910073   68414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:39:52.912929   68414 out.go:177] * Stopping node "no-preload-297799"  ...
	I0729 11:39:52.914531   68414 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:39:52.914584   68414 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:39:52.914846   68414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:39:52.914877   68414 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:39:52.918309   68414 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:39:52.918716   68414 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:39:52.918747   68414 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:39:52.918963   68414 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:39:52.919148   68414 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:39:52.919365   68414 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:39:52.919545   68414 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:39:53.029628   68414 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:39:53.094646   68414 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:39:53.159805   68414 main.go:141] libmachine: Stopping "no-preload-297799"...
	I0729 11:39:53.159848   68414 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:39:53.162076   68414 main.go:141] libmachine: (no-preload-297799) Calling .Stop
	I0729 11:39:53.166459   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 0/120
	I0729 11:39:54.168048   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 1/120
	I0729 11:39:55.169841   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 2/120
	I0729 11:39:56.171469   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 3/120
	I0729 11:39:57.173217   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 4/120
	I0729 11:39:58.175490   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 5/120
	I0729 11:39:59.177157   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 6/120
	I0729 11:40:00.178540   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 7/120
	I0729 11:40:01.180470   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 8/120
	I0729 11:40:02.181974   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 9/120
	I0729 11:40:03.183857   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 10/120
	I0729 11:40:04.185756   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 11/120
	I0729 11:40:05.187345   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 12/120
	I0729 11:40:06.189396   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 13/120
	I0729 11:40:07.191191   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 14/120
	I0729 11:40:08.193246   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 15/120
	I0729 11:40:09.195079   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 16/120
	I0729 11:40:10.196515   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 17/120
	I0729 11:40:11.198063   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 18/120
	I0729 11:40:12.199382   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 19/120
	I0729 11:40:13.201352   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 20/120
	I0729 11:40:14.202817   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 21/120
	I0729 11:40:15.204261   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 22/120
	I0729 11:40:16.206494   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 23/120
	I0729 11:40:17.208087   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 24/120
	I0729 11:40:18.210273   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 25/120
	I0729 11:40:19.212059   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 26/120
	I0729 11:40:20.213277   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 27/120
	I0729 11:40:21.214727   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 28/120
	I0729 11:40:22.215965   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 29/120
	I0729 11:40:23.217170   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 30/120
	I0729 11:40:24.218661   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 31/120
	I0729 11:40:25.219884   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 32/120
	I0729 11:40:26.221410   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 33/120
	I0729 11:40:27.223207   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 34/120
	I0729 11:40:28.225457   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 35/120
	I0729 11:40:29.227479   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 36/120
	I0729 11:40:30.229401   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 37/120
	I0729 11:40:31.230956   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 38/120
	I0729 11:40:32.233322   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 39/120
	I0729 11:40:33.235672   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 40/120
	I0729 11:40:34.237055   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 41/120
	I0729 11:40:35.238628   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 42/120
	I0729 11:40:36.240069   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 43/120
	I0729 11:40:37.241904   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 44/120
	I0729 11:40:38.243917   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 45/120
	I0729 11:40:39.245482   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 46/120
	I0729 11:40:40.246960   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 47/120
	I0729 11:40:41.248430   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 48/120
	I0729 11:40:42.249959   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 49/120
	I0729 11:40:43.251834   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 50/120
	I0729 11:40:44.253422   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 51/120
	I0729 11:40:45.254828   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 52/120
	I0729 11:40:46.256290   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 53/120
	I0729 11:40:47.257759   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 54/120
	I0729 11:40:48.259636   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 55/120
	I0729 11:40:49.261420   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 56/120
	I0729 11:40:50.262749   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 57/120
	I0729 11:40:51.264215   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 58/120
	I0729 11:40:52.265443   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 59/120
	I0729 11:40:53.267425   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 60/120
	I0729 11:40:54.268724   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 61/120
	I0729 11:40:55.270135   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 62/120
	I0729 11:40:56.271524   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 63/120
	I0729 11:40:57.272810   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 64/120
	I0729 11:40:58.274789   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 65/120
	I0729 11:40:59.276057   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 66/120
	I0729 11:41:00.277440   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 67/120
	I0729 11:41:01.278973   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 68/120
	I0729 11:41:02.280441   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 69/120
	I0729 11:41:03.282667   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 70/120
	I0729 11:41:04.284120   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 71/120
	I0729 11:41:05.285602   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 72/120
	I0729 11:41:06.287029   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 73/120
	I0729 11:41:07.288536   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 74/120
	I0729 11:41:08.290191   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 75/120
	I0729 11:41:09.291532   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 76/120
	I0729 11:41:10.292935   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 77/120
	I0729 11:41:11.294333   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 78/120
	I0729 11:41:12.296295   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 79/120
	I0729 11:41:13.298622   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 80/120
	I0729 11:41:14.300286   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 81/120
	I0729 11:41:15.302246   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 82/120
	I0729 11:41:16.303878   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 83/120
	I0729 11:41:17.305659   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 84/120
	I0729 11:41:18.307309   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 85/120
	I0729 11:41:19.308744   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 86/120
	I0729 11:41:20.310088   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 87/120
	I0729 11:41:21.311624   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 88/120
	I0729 11:41:22.312967   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 89/120
	I0729 11:41:23.315273   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 90/120
	I0729 11:41:24.316677   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 91/120
	I0729 11:41:25.318386   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 92/120
	I0729 11:41:26.320151   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 93/120
	I0729 11:41:27.321737   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 94/120
	I0729 11:41:28.323277   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 95/120
	I0729 11:41:29.324610   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 96/120
	I0729 11:41:30.326202   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 97/120
	I0729 11:41:31.327889   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 98/120
	I0729 11:41:32.329475   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 99/120
	I0729 11:41:33.331578   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 100/120
	I0729 11:41:34.333349   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 101/120
	I0729 11:41:35.335286   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 102/120
	I0729 11:41:36.337121   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 103/120
	I0729 11:41:37.338522   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 104/120
	I0729 11:41:38.340083   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 105/120
	I0729 11:41:39.341875   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 106/120
	I0729 11:41:40.343424   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 107/120
	I0729 11:41:41.344756   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 108/120
	I0729 11:41:42.347049   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 109/120
	I0729 11:41:43.348650   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 110/120
	I0729 11:41:44.349990   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 111/120
	I0729 11:41:45.351807   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 112/120
	I0729 11:41:46.353570   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 113/120
	I0729 11:41:47.355172   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 114/120
	I0729 11:41:48.357469   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 115/120
	I0729 11:41:49.358892   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 116/120
	I0729 11:41:50.361226   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 117/120
	I0729 11:41:51.362631   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 118/120
	I0729 11:41:52.364302   68414 main.go:141] libmachine: (no-preload-297799) Waiting for machine to stop 119/120
	I0729 11:41:53.365792   68414 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:41:53.365871   68414 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 11:41:53.367750   68414 out.go:177] 
	W0729 11:41:53.369011   68414 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 11:41:53.369030   68414 out.go:239] * 
	* 
	W0729 11:41:53.371955   68414 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:41:53.373211   68414 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-297799 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
E0729 11:41:58.596524   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:42:01.459289   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:42:06.187828   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.193149   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.203405   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.223700   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.264034   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.344361   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.504803   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:06.825690   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:07.466750   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:08.747892   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:11.308550   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799: exit status 3 (18.643987651s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:12.019096   69207 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0729 11:42:12.019120   69207 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-297799" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.17s)
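This failure and the embed-certs and default-k8s-diff-port ones below all have the same shape: the stop path issues a single Calling .Stop, then polls the driver state roughly once per second for 120 attempts, and when the VM still reports "Running" after the full budget it gives up with GUEST_STOP_TIMEOUT (exit status 82). The following is a minimal Go sketch of that polling pattern, not the actual minikube/libmachine code; the Driver interface, the stopWithTimeout helper and the fakeVM stand-in are illustrative assumptions taken from the "Waiting for machine to stop 0/120 ... 119/120" lines and the final error.

// Minimal sketch of the stop-and-wait pattern visible in the log above.
// NOT the real minikube/libmachine implementation; names and the retry
// budget are assumptions for illustration only.
package main

import (
	"fmt"
	"time"
)

// Driver models the small part of a hypervisor driver the stop path needs.
type Driver interface {
	Stop() error               // request shutdown (may be asynchronous)
	GetState() (string, error) // e.g. "Running", "Stopped"
}

// stopWithTimeout asks the driver to stop, then polls once per second up to
// `attempts` times before giving up the way the log above does.
func stopWithTimeout(d Driver, name string, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if state, err := d.GetState(); err == nil && state == "Stopped" {
			return nil // clean shutdown
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		time.Sleep(time.Second)
	}
	// Budget exhausted and the VM still reports "Running"; the caller maps
	// this to GUEST_STOP_TIMEOUT and exit status 82.
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}

// fakeVM never stops, reproducing the failure mode recorded in this report.
type fakeVM struct{}

func (fakeVM) Stop() error               { return nil }
func (fakeVM) GetState() (string, error) { return "Running", nil }

func main() {
	// A tiny budget so the demonstration finishes quickly.
	if err := stopWithTimeout(fakeVM{}, "no-preload-297799", 3); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a real kvm2 driver the same loop returns as soon as GetState reports "Stopped"; in this run it never does, which is exactly what the 0/120 through 119/120 lines record before the GUEST_STOP_TIMEOUT exit.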

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-731235 --alsologtostderr -v=3
E0729 11:40:57.154323   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:41:17.635437   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:41:20.497577   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.502829   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.513107   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.533443   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.573750   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.654136   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.814354   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:20.969667   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:41:21.135544   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:21.776489   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:23.057279   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:41:25.617701   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-731235 --alsologtostderr -v=3: exit status 82 (2m0.531155211s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-731235"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:40:47.481079   68855 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:40:47.481195   68855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:47.481203   68855 out.go:304] Setting ErrFile to fd 2...
	I0729 11:40:47.481207   68855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:47.481393   68855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:40:47.481663   68855 out.go:298] Setting JSON to false
	I0729 11:40:47.481740   68855 mustload.go:65] Loading cluster: embed-certs-731235
	I0729 11:40:47.482088   68855 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:47.482185   68855 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:40:47.482354   68855 mustload.go:65] Loading cluster: embed-certs-731235
	I0729 11:40:47.482453   68855 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:47.482481   68855 stop.go:39] StopHost: embed-certs-731235
	I0729 11:40:47.482884   68855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:40:47.482934   68855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:40:47.498966   68855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0729 11:40:47.499484   68855 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:40:47.500337   68855 main.go:141] libmachine: Using API Version  1
	I0729 11:40:47.500354   68855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:40:47.500779   68855 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:40:47.503399   68855 out.go:177] * Stopping node "embed-certs-731235"  ...
	I0729 11:40:47.505129   68855 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:40:47.505192   68855 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:40:47.505491   68855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:40:47.505521   68855 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:40:47.509165   68855 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:40:47.510821   68855 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:40:47.510929   68855 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:39:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:40:47.510988   68855 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:40:47.511042   68855 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:40:47.511384   68855 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:40:47.511626   68855 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:40:47.613671   68855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:40:47.676974   68855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:40:47.752894   68855 main.go:141] libmachine: Stopping "embed-certs-731235"...
	I0729 11:40:47.752931   68855 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:40:47.754599   68855 main.go:141] libmachine: (embed-certs-731235) Calling .Stop
	I0729 11:40:47.758515   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 0/120
	I0729 11:40:48.760031   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 1/120
	I0729 11:40:49.761448   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 2/120
	I0729 11:40:50.763008   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 3/120
	I0729 11:40:51.764571   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 4/120
	I0729 11:40:52.766657   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 5/120
	I0729 11:40:53.768449   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 6/120
	I0729 11:40:54.769820   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 7/120
	I0729 11:40:55.772069   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 8/120
	I0729 11:40:56.773416   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 9/120
	I0729 11:40:57.775081   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 10/120
	I0729 11:40:58.776380   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 11/120
	I0729 11:40:59.777731   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 12/120
	I0729 11:41:00.779131   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 13/120
	I0729 11:41:01.781603   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 14/120
	I0729 11:41:02.783690   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 15/120
	I0729 11:41:03.785347   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 16/120
	I0729 11:41:04.786766   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 17/120
	I0729 11:41:05.788094   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 18/120
	I0729 11:41:06.789433   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 19/120
	I0729 11:41:07.791486   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 20/120
	I0729 11:41:08.792894   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 21/120
	I0729 11:41:09.794203   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 22/120
	I0729 11:41:10.795548   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 23/120
	I0729 11:41:11.796850   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 24/120
	I0729 11:41:12.799140   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 25/120
	I0729 11:41:13.801374   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 26/120
	I0729 11:41:14.802824   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 27/120
	I0729 11:41:15.804509   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 28/120
	I0729 11:41:16.806311   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 29/120
	I0729 11:41:17.808521   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 30/120
	I0729 11:41:18.809851   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 31/120
	I0729 11:41:19.811362   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 32/120
	I0729 11:41:20.813168   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 33/120
	I0729 11:41:21.814990   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 34/120
	I0729 11:41:22.816727   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 35/120
	I0729 11:41:23.818323   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 36/120
	I0729 11:41:24.819690   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 37/120
	I0729 11:41:25.821073   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 38/120
	I0729 11:41:26.822658   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 39/120
	I0729 11:41:27.824744   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 40/120
	I0729 11:41:28.826128   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 41/120
	I0729 11:41:29.827648   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 42/120
	I0729 11:41:30.829214   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 43/120
	I0729 11:41:31.830502   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 44/120
	I0729 11:41:32.832552   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 45/120
	I0729 11:41:33.833865   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 46/120
	I0729 11:41:34.835233   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 47/120
	I0729 11:41:35.836607   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 48/120
	I0729 11:41:36.837801   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 49/120
	I0729 11:41:37.840231   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 50/120
	I0729 11:41:38.841593   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 51/120
	I0729 11:41:39.843089   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 52/120
	I0729 11:41:40.844469   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 53/120
	I0729 11:41:41.846972   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 54/120
	I0729 11:41:42.848887   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 55/120
	I0729 11:41:43.850332   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 56/120
	I0729 11:41:44.851739   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 57/120
	I0729 11:41:45.853430   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 58/120
	I0729 11:41:46.855036   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 59/120
	I0729 11:41:47.857275   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 60/120
	I0729 11:41:48.858728   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 61/120
	I0729 11:41:49.860074   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 62/120
	I0729 11:41:50.861333   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 63/120
	I0729 11:41:51.862784   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 64/120
	I0729 11:41:52.864677   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 65/120
	I0729 11:41:53.866171   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 66/120
	I0729 11:41:54.867622   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 67/120
	I0729 11:41:55.869034   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 68/120
	I0729 11:41:56.870574   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 69/120
	I0729 11:41:57.872875   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 70/120
	I0729 11:41:58.874233   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 71/120
	I0729 11:41:59.875858   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 72/120
	I0729 11:42:00.877394   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 73/120
	I0729 11:42:01.878858   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 74/120
	I0729 11:42:02.881204   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 75/120
	I0729 11:42:03.882842   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 76/120
	I0729 11:42:04.884200   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 77/120
	I0729 11:42:05.885684   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 78/120
	I0729 11:42:06.887134   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 79/120
	I0729 11:42:07.889491   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 80/120
	I0729 11:42:08.890944   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 81/120
	I0729 11:42:09.892507   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 82/120
	I0729 11:42:10.893808   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 83/120
	I0729 11:42:11.895334   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 84/120
	I0729 11:42:12.897464   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 85/120
	I0729 11:42:13.898979   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 86/120
	I0729 11:42:14.901250   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 87/120
	I0729 11:42:15.902794   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 88/120
	I0729 11:42:16.905056   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 89/120
	I0729 11:42:17.907655   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 90/120
	I0729 11:42:18.909333   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 91/120
	I0729 11:42:19.910737   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 92/120
	I0729 11:42:20.912196   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 93/120
	I0729 11:42:21.913455   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 94/120
	I0729 11:42:22.915644   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 95/120
	I0729 11:42:23.917006   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 96/120
	I0729 11:42:24.918454   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 97/120
	I0729 11:42:25.920037   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 98/120
	I0729 11:42:26.921449   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 99/120
	I0729 11:42:27.923930   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 100/120
	I0729 11:42:28.925581   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 101/120
	I0729 11:42:29.927313   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 102/120
	I0729 11:42:30.928637   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 103/120
	I0729 11:42:31.929850   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 104/120
	I0729 11:42:32.932117   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 105/120
	I0729 11:42:33.933756   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 106/120
	I0729 11:42:34.935211   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 107/120
	I0729 11:42:35.936751   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 108/120
	I0729 11:42:36.938352   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 109/120
	I0729 11:42:37.940740   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 110/120
	I0729 11:42:38.942273   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 111/120
	I0729 11:42:39.943611   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 112/120
	I0729 11:42:40.945086   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 113/120
	I0729 11:42:41.946986   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 114/120
	I0729 11:42:42.949093   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 115/120
	I0729 11:42:43.950871   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 116/120
	I0729 11:42:44.952161   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 117/120
	I0729 11:42:45.953678   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 118/120
	I0729 11:42:46.955154   68855 main.go:141] libmachine: (embed-certs-731235) Waiting for machine to stop 119/120
	I0729 11:42:47.956515   68855 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:42:47.956588   68855 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 11:42:47.958574   68855 out.go:177] 
	W0729 11:42:47.960172   68855 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 11:42:47.960189   68855 out.go:239] * 
	* 
	W0729 11:42:47.962639   68855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:42:47.963982   68855 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-731235 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
E0729 11:42:54.056074   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:57.607462   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.612726   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.623004   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.643271   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.683578   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.763961   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:57.924544   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:58.245206   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:42:58.886220   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:43:00.166846   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:43:02.727575   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:43:03.511037   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235: exit status 3 (18.580875423s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:43:06.547062   69681 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0729 11:43:06.547085   69681 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-731235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.11s)
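Before attempting to halt the VM, each of these stops first backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup on the guest, as the "backing up vm config" and rsync lines show. Below is a minimal sketch of that step as a standalone Go program using golang.org/x/crypto/ssh, under the assumption that plain SSH sessions are enough to reproduce it; the runBackup helper and the hard-coded address and key path are illustrative values copied from the embed-certs log, not minikube's actual ssh_runner.

// Minimal sketch of the pre-stop config backup, run over SSH against the
// guest. NOT minikube's ssh_runner; helper names and values are assumptions.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runBackup copies /etc/cni and /etc/kubernetes into /var/lib/minikube/backup
// on the guest, preserving their absolute paths via rsync --relative.
func runBackup(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	cmds := []string{
		"sudo mkdir -p /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup",
	}
	for _, cmd := range cmds {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		err = session.Run(cmd)
		session.Close()
		if err != nil {
			return fmt.Errorf("%q failed: %w", cmd, err)
		}
	}
	return nil
}

func main() {
	// Values mirror the embed-certs log above; adjust for a real cluster.
	if err := runBackup("192.168.61.202:22", "docker",
		"/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa"); err != nil {
		log.Fatal(err)
	}
}

The --relative flag keeps the absolute /etc/... paths intact under the backup directory, so a later restart can copy the trees straight back into place.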

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-754486 --alsologtostderr -v=3
E0729 11:41:40.979136   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-754486 --alsologtostderr -v=3: exit status 82 (2m0.516102842s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-754486"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:41:38.612972   69137 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:41:38.613233   69137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:41:38.613242   69137 out.go:304] Setting ErrFile to fd 2...
	I0729 11:41:38.613247   69137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:41:38.613417   69137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:41:38.613643   69137 out.go:298] Setting JSON to false
	I0729 11:41:38.613723   69137 mustload.go:65] Loading cluster: default-k8s-diff-port-754486
	I0729 11:41:38.614046   69137 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:41:38.614113   69137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:41:38.614299   69137 mustload.go:65] Loading cluster: default-k8s-diff-port-754486
	I0729 11:41:38.614400   69137 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:41:38.614429   69137 stop.go:39] StopHost: default-k8s-diff-port-754486
	I0729 11:41:38.614856   69137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:41:38.614909   69137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:41:38.629697   69137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0729 11:41:38.630168   69137 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:41:38.630776   69137 main.go:141] libmachine: Using API Version  1
	I0729 11:41:38.630826   69137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:41:38.631133   69137 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:41:38.633248   69137 out.go:177] * Stopping node "default-k8s-diff-port-754486"  ...
	I0729 11:41:38.634516   69137 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:41:38.634556   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:41:38.634800   69137 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:41:38.634832   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:41:38.637490   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:41:38.637882   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:41:38.637914   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:41:38.638039   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:41:38.638197   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:41:38.638337   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:41:38.638501   69137 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:41:38.750041   69137 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:41:38.814833   69137 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:41:38.881638   69137 main.go:141] libmachine: Stopping "default-k8s-diff-port-754486"...
	I0729 11:41:38.881665   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:41:38.883233   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Stop
	I0729 11:41:38.886966   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 0/120
	I0729 11:41:39.888263   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 1/120
	I0729 11:41:40.889770   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 2/120
	I0729 11:41:41.891054   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 3/120
	I0729 11:41:42.892832   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 4/120
	I0729 11:41:43.894877   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 5/120
	I0729 11:41:44.896562   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 6/120
	I0729 11:41:45.897921   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 7/120
	I0729 11:41:46.899390   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 8/120
	I0729 11:41:47.901287   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 9/120
	I0729 11:41:48.902674   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 10/120
	I0729 11:41:49.904017   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 11/120
	I0729 11:41:50.905570   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 12/120
	I0729 11:41:51.907194   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 13/120
	I0729 11:41:52.908553   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 14/120
	I0729 11:41:53.910345   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 15/120
	I0729 11:41:54.911519   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 16/120
	I0729 11:41:55.912819   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 17/120
	I0729 11:41:56.914186   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 18/120
	I0729 11:41:57.915524   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 19/120
	I0729 11:41:58.917525   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 20/120
	I0729 11:41:59.918968   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 21/120
	I0729 11:42:00.920502   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 22/120
	I0729 11:42:01.921992   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 23/120
	I0729 11:42:02.923452   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 24/120
	I0729 11:42:03.925545   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 25/120
	I0729 11:42:04.926837   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 26/120
	I0729 11:42:05.928188   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 27/120
	I0729 11:42:06.929687   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 28/120
	I0729 11:42:07.931715   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 29/120
	I0729 11:42:08.932925   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 30/120
	I0729 11:42:09.934478   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 31/120
	I0729 11:42:10.935885   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 32/120
	I0729 11:42:11.937314   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 33/120
	I0729 11:42:12.938904   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 34/120
	I0729 11:42:13.941255   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 35/120
	I0729 11:42:14.942595   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 36/120
	I0729 11:42:15.943986   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 37/120
	I0729 11:42:16.945486   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 38/120
	I0729 11:42:17.946910   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 39/120
	I0729 11:42:18.949070   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 40/120
	I0729 11:42:19.950489   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 41/120
	I0729 11:42:20.951818   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 42/120
	I0729 11:42:21.953352   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 43/120
	I0729 11:42:22.954900   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 44/120
	I0729 11:42:23.956979   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 45/120
	I0729 11:42:24.958334   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 46/120
	I0729 11:42:25.959812   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 47/120
	I0729 11:42:26.961394   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 48/120
	I0729 11:42:27.962956   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 49/120
	I0729 11:42:28.965381   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 50/120
	I0729 11:42:29.966729   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 51/120
	I0729 11:42:30.968023   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 52/120
	I0729 11:42:31.969370   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 53/120
	I0729 11:42:32.970993   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 54/120
	I0729 11:42:33.973086   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 55/120
	I0729 11:42:34.974381   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 56/120
	I0729 11:42:35.975687   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 57/120
	I0729 11:42:36.976972   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 58/120
	I0729 11:42:37.978297   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 59/120
	I0729 11:42:38.980574   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 60/120
	I0729 11:42:39.982139   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 61/120
	I0729 11:42:40.983649   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 62/120
	I0729 11:42:41.985064   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 63/120
	I0729 11:42:42.986684   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 64/120
	I0729 11:42:43.988881   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 65/120
	I0729 11:42:44.990391   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 66/120
	I0729 11:42:45.991742   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 67/120
	I0729 11:42:46.993321   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 68/120
	I0729 11:42:47.995408   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 69/120
	I0729 11:42:48.997919   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 70/120
	I0729 11:42:49.999538   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 71/120
	I0729 11:42:51.001362   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 72/120
	I0729 11:42:52.003028   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 73/120
	I0729 11:42:53.004549   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 74/120
	I0729 11:42:54.006741   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 75/120
	I0729 11:42:55.008300   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 76/120
	I0729 11:42:56.009792   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 77/120
	I0729 11:42:57.011398   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 78/120
	I0729 11:42:58.012670   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 79/120
	I0729 11:42:59.014905   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 80/120
	I0729 11:43:00.016395   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 81/120
	I0729 11:43:01.017894   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 82/120
	I0729 11:43:02.019397   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 83/120
	I0729 11:43:03.021082   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 84/120
	I0729 11:43:04.023295   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 85/120
	I0729 11:43:05.024705   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 86/120
	I0729 11:43:06.026291   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 87/120
	I0729 11:43:07.027712   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 88/120
	I0729 11:43:08.029154   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 89/120
	I0729 11:43:09.031509   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 90/120
	I0729 11:43:10.032984   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 91/120
	I0729 11:43:11.034569   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 92/120
	I0729 11:43:12.036101   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 93/120
	I0729 11:43:13.037510   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 94/120
	I0729 11:43:14.039641   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 95/120
	I0729 11:43:15.041152   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 96/120
	I0729 11:43:16.042561   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 97/120
	I0729 11:43:17.043990   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 98/120
	I0729 11:43:18.045425   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 99/120
	I0729 11:43:19.047647   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 100/120
	I0729 11:43:20.049615   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 101/120
	I0729 11:43:21.050942   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 102/120
	I0729 11:43:22.052549   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 103/120
	I0729 11:43:23.054013   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 104/120
	I0729 11:43:24.056194   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 105/120
	I0729 11:43:25.057920   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 106/120
	I0729 11:43:26.059342   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 107/120
	I0729 11:43:27.060807   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 108/120
	I0729 11:43:28.062509   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 109/120
	I0729 11:43:29.064892   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 110/120
	I0729 11:43:30.066372   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 111/120
	I0729 11:43:31.067793   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 112/120
	I0729 11:43:32.069185   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 113/120
	I0729 11:43:33.070645   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 114/120
	I0729 11:43:34.072755   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 115/120
	I0729 11:43:35.074253   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 116/120
	I0729 11:43:36.075720   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 117/120
	I0729 11:43:37.077097   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 118/120
	I0729 11:43:38.078570   69137 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for machine to stop 119/120
	I0729 11:43:39.079223   69137 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:43:39.079277   69137 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 11:43:39.081297   69137 out.go:177] 
	W0729 11:43:39.082790   69137 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 11:43:39.082808   69137 out.go:239] * 
	* 
	W0729 11:43:39.085441   69137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:43:39.086674   69137 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-754486 --alsologtostderr -v=3" : exit status 82
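The GUEST_STOP_TIMEOUT above is the end of the 120 one-second polls logged as "Waiting for machine to stop N/120". A minimal Go sketch of that shape of wait loop follows; vmState and the profile name are hypothetical stand-ins for illustration, not minikube's actual stop implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a hypothetical stand-in for the libmachine driver's state query;
	// here it always reports "Running", so the loop exhausts every attempt.
	func vmState(profile string) string { return "Running" }

	// waitForStop polls once per second, up to maxAttempts times, mirroring the
	// "Waiting for machine to stop N/120" lines in the log above.
	func waitForStop(profile string, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if vmState(profile) != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop("default-k8s-diff-port-754486", 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}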
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
E0729 11:43:40.250553   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486: exit status 3 (18.658217236s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:43:57.746989   70007 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host
	E0729 11:43:57.747010   70007 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-754486" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
E0729 11:42:13.095475   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.100739   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.111029   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.131315   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.171654   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.252067   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.412520   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:13.733597   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:14.373913   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799: exit status 3 (3.167780182s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:15.187046   69289 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0729 11:42:15.187068   69289 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-297799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 11:42:15.654126   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:42:16.428865   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:18.214407   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-297799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152403697s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-297799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
E0729 11:42:23.334689   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799: exit status 3 (3.06329114s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:24.403128   69373 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0729 11:42:24.403147   69373 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-297799" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
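Every status probe in this test fails the same way: the VM's SSH port is unreachable (dial tcp 192.168.39.120:22: connect: no route to host), so the host state is reported as "Error" instead of "Stopped". A minimal reachability probe for that address, assuming a plain TCP dial is enough to reproduce the error class; this is not minikube's status code path.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the log above; a bare TCP dial reproduces the
		// "connect: no route to host" class of error without an SSH handshake.
		conn, err := net.DialTimeout("tcp", "192.168.39.120:22", 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}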

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-188043 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-188043 create -f testdata/busybox.yaml: exit status 1 (42.373464ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-188043" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-188043 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 6 (218.528456ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:40.231057   69550 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-188043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 6 (220.883884ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:42:40.451355   69580 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-188043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
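Both status probes report "Running" but warn that kubectl points at a stale minikube-vm: the profile's endpoint is missing from the kubeconfig, so the context old-k8s-version-188043 cannot be resolved. A short client-go sketch that lists the contexts actually present in that kubeconfig, assuming client-go is available in the build environment; the path is taken from the error lines above.

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the kubeconfig endpoint error in the log above.
		path := "/home/jenkins/minikube-integration/19337-3845/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			// old-k8s-version-188043 is expected to be absent from this list.
			fmt.Println("context:", name)
		}
	}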

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 11:42:42.419725   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:42:47.150815   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.001429104s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-188043 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-188043 describe deploy/metrics-server -n kube-system: exit status 1 (43.374283ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-188043" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-188043 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 6 (218.463381ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:44:30.715551   70343 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-188043" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.26s)
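The enable step fails before any addon manifests can be applied: kubectl inside the VM cannot reach the apiserver ("The connection to the server localhost:8443 was refused"). A minimal reachability probe against that endpoint, assuming it is run on the node itself; the host and port are taken from the log, and the probe is illustrative only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed certificate, so verification is
		// skipped for this reachability-only probe.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz:", resp.Status)
	}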

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
E0729 11:43:07.848429   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235: exit status 3 (3.169191501s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:43:09.715025   69777 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0729 11:43:09.715053   69777 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-731235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-731235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151094361s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-731235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
E0729 11:43:18.088761   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235: exit status 3 (3.063281696s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:43:18.931067   69860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0729 11:43:18.931089   69860 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-731235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
E0729 11:44:00.731333   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486: exit status 3 (3.167692266s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:44:00.915043   70121 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host
	E0729 11:44:00.915065   70121 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 11:44:04.340400   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153169147s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486: exit status 3 (3.062712611s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:44:10.131112   70185 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host
	E0729 11:44:10.131135   70185 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.111:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-754486" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (699.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 11:44:41.692613   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:44:44.171667   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:50.032920   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:44:56.937657   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:44:57.915806   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:45:04.652323   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:45:36.673476   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:45:41.451396   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:45:45.613783   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:46:03.612807   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:46:04.357476   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:46:20.497789   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:46:48.181369   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:47:06.187589   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:47:07.534929   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:47:13.094680   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:47:33.873773   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:47:40.778150   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:47:57.607858   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:48:03.511663   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 11:48:19.769266   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:48:25.291894   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:48:47.452990   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:49:23.690090   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:49:26.561827   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 11:49:51.375894   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:49:57.916247   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:50:36.673484   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:51:20.496930   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
E0729 11:52:06.187642   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:52:13.094425   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m35.665482824s)

                                                
                                                
-- stdout --
	* [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
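The lease wait above retries with a growing, jittered delay (267 ms, 364 ms, 486 ms, … up to a few seconds) until the domain's MAC shows up with an address. A minimal Go sketch of that retry shape, with a hypothetical lookupIP stub standing in for the libvirt DHCP-lease query (the stub here never succeeds, so it only demonstrates the backoff):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt network for the
// domain's DHCP lease; this stub always fails, so main only shows the backoff.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a jittered, growing delay, the same
// shape as the retry.go waits in the log (hundreds of ms up to a few s).
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		// Add up to 50% jitter and grow the base delay, capped at ~5s.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:0b:d7:0d", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}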
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
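Reachability is decided by shelling out to the system ssh client with the hardening options shown above and running `exit 0`; a zero exit status is the signal that the VM is up. A rough sketch of that probe, with a placeholder key path and the address from the log (option order is normalized here so flags precede the destination):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... user@addr exit 0` with the same hardening
// options the log shows (no host-key persistence, short timeouts) and
// reports whether the command exited zero.
func sshReachable(user, addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Address from the log; key path is a placeholder, substitute your own.
	for i := 0; i < 10; i++ {
		if sshReachable("docker", "192.168.72.61", "/path/to/machine/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}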
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
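The hostname step writes /etc/hostname and then patches /etc/hosts: if no entry already names the host, an existing 127.0.1.1 line is rewritten, otherwise one is appended. A small Go sketch of that same decision applied to hosts-file content (pure string manipulation; the real flow runs the shell shown above over SSH):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell snippet in the log: if no /etc/hosts
// line already maps the hostname, either rewrite an existing 127.0.1.1
// entry or append a new one.
func ensureHostname(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + entry + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(before, "old-k8s-version-188043"))
}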
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
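The regenerated server certificate carries every name the API server can be reached by as subject alternative names. A compact crypto/x509 sketch that puts the same SANs on a certificate; it self-signs for brevity, whereas the real provisioner signs with the cached CA key and writes server.pem/server-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-188043"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list: IPs and DNS names the server answers on.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.61")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-188043"},
	}
	// Self-signed here for brevity; minikube signs with its cached CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}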
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
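The clock check runs `date +%s.%N` in the guest, parses the result, and compares it with the host's view of the time; the ~94.5 ms delta above is inside tolerance, so no clock adjustment is needed. A minimal sketch of that parse-and-compare, with the tolerance value chosen here purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output from the VM into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722253682.444336856\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, for illustration only
	fmt.Printf("guest clock delta is %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}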
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
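Before switching the node to CRI-O, the cri-docker and docker units are stopped, disabled, and masked; failures from `stop` are tolerated because the units may not exist on the image. A sketch of that sequence run locally for brevity (the log runs each step with sudo through the ssh runner):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps any failure with its output; stopping
// units that are absent is expected to fail, so callers just log it.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Mirror the log's sequence: stop and mask cri-docker, then docker,
	// so only CRI-O answers on the node.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("warning:", err)
		}
	}
}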
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
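The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and reset conmon_cgroup to "pod" in 02-crio.conf. The same rewrite expressed as pure-Go string manipulation, which is easier to unit-test than sed:

package main

import (
	"fmt"
	"regexp"
)

// configureCrio applies the same three edits the sed commands in the log
// make to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch
// the cgroup manager to cgroupfs, and reset conmon_cgroup to "pod".
func configureCrio(conf, pauseImage string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	before := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(configureCrio(before, "registry.k8s.io/pause:3.2"))
}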
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
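The sysctl probe failing with status 255 means the bridge netfilter keys do not exist yet, so br_netfilter is loaded and IPv4 forwarding is switched on. A small sketch of that fallback (it must run as root inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the same sysctl key the log checks; a failure there means the
	// bridge netfilter module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	// Enable IPv4 forwarding, the equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}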
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
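Whether the preload can be skipped is decided by listing the runtime's images and looking for the pinned kube-apiserver tag. A sketch of that check against `crictl images --output json`; the JSON field names are assumed from the CRI JSON encoding, so verify them against your crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches the assumed shape of `crictl images --output json`
// closely enough for a presence check.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given tag, the
// check behind the "assuming images are not preloaded" log line.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if !ok {
		fmt.Println("couldn't find preloaded image; assuming images are not preloaded")
	}
}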
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
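With no images present, the preloaded tarball is copied to /preloaded.tar.lz4 and unpacked into /var with lz4 decompression and security xattrs preserved, then removed. A sketch of the extraction step, assuming the tarball is already in place:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Extract the preloaded image tarball the same way the log does: lz4
	// decompression via tar's -I flag, preserving security xattrs, into /var.
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
}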
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
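With the preload still not providing the expected images, LoadCachedImages takes over: each required image is inspected in the runtime, any stale copy is removed with crictl, and the image would then be loaded from the local cache under .minikube/cache/images. Here every load fails because the cache files (for example .../registry.k8s.io/pause_3.2) do not exist, which produces the warning above. The per-image check, sketched for pause:3.2 with the commands from the log:

	# does the runtime hold the image at the expected digest?
	sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2

	# it does not, so any stale copy is removed before loading from cache
	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2

	# the cached file that would be loaded next is missing on this host:
	# /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2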
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
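The kubelet drop-in, unit file and kubeadm.yaml generated above are then written to the node, the control-plane hostname is pinned in /etc/hosts, and the kubelet is started. Condensed from the commands in the log:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	# 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are scp'd into those directories

	# ensure control-plane.minikube.internal resolves to the node IP
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts

	sudo systemctl daemon-reload
	sudo systemctl start kubelet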
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
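Certificate setup is all cache hits: the shared CAs and the profile certs already exist, so they are only copied into /var/lib/minikube/certs, linked into /etc/ssl/certs under their OpenSSL subject-hash names, and checked for expiry within the next 24 hours. The two openssl checks used above:

	# subject hash that names the /etc/ssl/certs/<hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

	# exits non-zero if the cert expires within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400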
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
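Because none of the expected kubeconfig files were found, the restart path regenerates certs, kubeconfigs and static-pod manifests by running individual kubeadm init phases with the pinned v1.20.0 binaries. The same commands as in the log, with the binary path factored into a shell variable (KPATH, introduced here only for readability):

	KPATH=/var/lib/minikube/binaries/v1.20.0
	sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml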
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
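After the init phases, the run polls for a kube-apiserver process roughly twice a second (the block above); the static-pod apiserver never comes up here, so the poll runs out its window and falls through to gathering diagnostics. The wait is roughly equivalent to:

	# loop added for illustration; minikube re-runs the pgrep on a ~500ms tick internally
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done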
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
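Each diagnostics round enumerates CRI containers for every control-plane component (all empty, since nothing is running) and then tails kubelet, dmesg, CRI-O and container-status output; "describe nodes" fails because nothing answers on localhost:8443. The same information can be pulled by hand with the commands the log runs:

	sudo crictl ps -a --quiet --name=kube-apiserver   # empty: no apiserver container
	sudo journalctl -u kubelet -n 400                 # kubelet log tail
	sudo journalctl -u crio -n 400                    # CRI-O log tail
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # connection refused while the apiserver is down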
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
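	(Editor's note, not part of the test output: the cycle above probes each control-plane component with crictl and finds nothing. A minimal standalone sketch of that probe, assuming crictl is installed on the node and reachable via sudo, is:

	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    # List all containers in any state whose name matches, printing only their IDs.
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "No container was found matching \"$name\""
	    fi
	  done

	An empty ID list for every component, as seen repeatedly in this log, means the control plane never came up under CRI-O.)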
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
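	(Editor's note, not part of the test output: when no containers are found, the run falls back to host-level log collection. The commands are the ones shown verbatim in the lines above; collected by hand on the node they would be:

	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The describe-nodes step keeps failing with "connection to the server localhost:8443 was refused" because the apiserver container listed above was never started.)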
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
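	Each cycle in this stretch of the log is the same diagnostic pass: minikube looks for a kube-apiserver container or process, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal sketch of the same checks run by hand inside the node (for example after `minikube ssh`), using the exact commands shown above; the kubectl path is specific to this v1.20.0 run:
	
	# Is an apiserver container or process present at all?
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# The same log sources minikube collects each cycle
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Fails with "connection refused" while nothing is listening on :8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig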
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
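	Every describe-nodes attempt fails the same way: localhost:8443 refuses the connection, meaning nothing is serving the Kubernetes API on the node yet. Two quick checks (hypothetical, not part of the test run) that would confirm this from inside the node:
	
	# No process should be listening on the apiserver port while this error repeats
	sudo ss -tlnp | grep 8443
	# A direct probe of the API endpoint should likewise be refused
	curl -sk https://localhost:8443/healthz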
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
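
Each repeated block above is one polling pass by minikube's log gatherer: it first checks whether a kube-apiserver process exists (`pgrep`), then asks the container runtime via `crictl` whether any control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) were ever created, and finally collects kubelet, dmesg, CRI-O, container-status and `kubectl describe nodes` output. Because the apiserver never comes up, every pass finds zero containers and `kubectl` fails with "The connection to the server localhost:8443 was refused". The same checks can be reproduced by hand from inside the node; this is only a sketch, using the commands visible in the log and assuming `minikube ssh` access to this node:

    # Is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Has the CRI ever created the control-plane containers (running or exited)?
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-controller-manager

    # Recent kubelet and CRI-O logs, as gathered in each pass above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # Node view through the node-local kubectl; fails with "connection refused"
    # for as long as the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
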
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
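
For reference on the `crictl` invocations above: `-a` includes exited containers, `--quiet` prints only container IDs, and `--name` filters by container name, so an empty result is what the gatherer reports as `found id: ""` and "0 containers". To see everything the runtime knows about in one listing (the same listing the "container status" step runs), one can run inside the node:

    # All containers in any state; empty here because none were ever created.
    sudo crictl ps -a
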
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
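
The recurring "The connection to the server localhost:8443 was refused" means nothing is listening on the apiserver port inside the node, which is consistent with `crictl` never finding a kube-apiserver container. A quick manual confirmation (a sketch; it assumes `ss` and `curl` are available in the node image):

    # Anything bound to the apiserver port?
    sudo ss -tlnp | grep 8443

    # The apiserver health endpoint; connection refused while the apiserver is down.
    curl -sk https://localhost:8443/healthz
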
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	* 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 

                                                
                                                
** /stderr **
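For reference, the kubeadm output above repeatedly reports the kubelet health endpoint on 127.0.0.1:10248 refusing connections and points at the kubelet and the control-plane containers. A minimal sketch of the checks it recommends, assuming SSH access into the old-k8s-version-188043 VM with this report's minikube binary (the profile name is taken from this run; the commands are the ones quoted in the error text):

    out/minikube-linux-amd64 -p old-k8s-version-188043 ssh
    # inside the VM, inspect the kubelet and any crashed control-plane containers:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause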
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
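The suggestion emitted near the end of the log is to retry with the kubelet cgroup driver pinned to systemd. A hedged illustration of the same start invocation with that extra flag appended (not a command this run executed):

    out/minikube-linux-amd64 start -p old-k8s-version-188043 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd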
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (228.136808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25: (1.669569251s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
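	The configureAuth step above issues a server certificate signed by the minikube CA with the SANs logged at provision.go:117 (127.0.0.1, 192.168.61.202, embed-certs-731235, localhost, minikube) and org jenkins.embed-certs-731235, then copies it to /etc/docker on the guest. A self-contained sketch of generating a certificate with that SAN set; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-731235"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-731235", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.202")},
		}
		// Self-signed for the sketch; minikube uses the CA cert/key as the parent instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}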
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
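	fix.go reads the guest clock over SSH (the date command above), compares it against the host-side timestamp and only warns when the offset exceeds a tolerance; here the delta is 1722253642.050121254 minus 1722253641.964537244, about 85.58 ms. A small sketch of that comparison, using an assumed helper rather than minikube's actual code, and an illustrative tolerance value:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	// It assumes a 9-digit nanosecond field, as in the log above.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1722253642.050121254") // guest clock from the log above
		host := time.Unix(1722253641, 964537244)            // host-side timestamp from the log above
		delta := guest.Sub(host)
		const tolerance = 1 * time.Second // illustrative; not necessarily minikube's threshold
		fmt.Printf("delta=%v within tolerance=%v: %t\n", delta, tolerance, delta < tolerance && delta > -tolerance)
	}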
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
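	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and the unprivileged-port sysctl, and then restart the service. A sketch of the same key/value rewrite done in Go instead of sed, with a hypothetical helper shown only to make the edits explicit:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setTOMLKey replaces (or appends) a `key = value` assignment in a CRI-O
	// drop-in, mirroring what the sed one-liners in the log accomplish.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %s", key, value)
		if re.MatchString(conf) {
			return re.ReplaceAllString(conf, line)
		}
		return conf + line + "\n"
	}

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.5\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		conf = setTOMLKey(conf, "pause_image", `"registry.k8s.io/pause:3.9"`)
		conf = setTOMLKey(conf, "cgroup_manager", `"cgroupfs"`)
		conf = setTOMLKey(conf, "conmon_cgroup", `"pod"`)
		fmt.Print(conf)
	}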
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
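	kubeadm.go renders the kubelet systemd drop-in above from the node's Kubernetes version, hostname and IP. A minimal text/template sketch of that kind of rendering; the template text here is reconstructed from the log output and is illustrative, not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.30.3", "embed-certs-731235", "192.168.61.202"}
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}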
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
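	(The -checkend 86400 runs above verify that each control-plane certificate is still valid 24 hours from now. A minimal sketch of the same check in Go, using one of the certificate paths from the log; this is not minikube's own code:)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works the same way.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}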
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
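	(The restart path above re-runs individual kubeadm init phases — certs, kubeconfig, kubelet-start, control-plane, etcd local — rather than a full kubeadm init. A rough sketch of issuing one such phase command with os/exec, run locally instead of through minikube's ssh_runner:)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The command string is copied from the log above; running it requires root and the
	// minikube-provisioned binaries, so treat this purely as an illustration.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml`)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm phase failed:", err)
	}
}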
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
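	(The health wait above keeps requesting https://192.168.61.202:8443/healthz, treating connection refused, 403 and 500 responses as "not ready yet" until a 200 arrives. A minimal sketch of such a poll — not the api_server.go implementation — assuming anonymous access and skipping TLS verification because the caller does not trust the apiserver certificate:)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the probe is anonymous, so certificate verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute) // timeout value is an assumption, not from the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.202:8443/healthz") // endpoint taken from the log
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}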
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
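	(The pod_ready waits above poll each system-critical pod until its Ready condition is True, skipping pods while the hosting node itself is not Ready. A condensed client-go sketch of that condition check; the kubeconfig path, pod name and polling interval are assumptions for illustration:)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-hw66r", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}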
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
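	(WaitForSSH above shells out to the external ssh binary with the machine's private key until "exit 0" succeeds. A compact sketch of the same readiness probe using golang.org/x/crypto/ssh instead of the external client; host, user and key path are taken from the log, while the retry budget is an assumption:)

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged command
		Timeout:         10 * time.Second,
	}
	for i := 0; i < 30; i++ { // retry budget is an assumption
		if client, err := ssh.Dial("tcp", "192.168.50.111:22", cfg); err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0") // same probe command the log issues
				session.Close()
				if runErr == nil {
					client.Close()
					fmt.Println("SSH is available")
					return
				}
			}
			client.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}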
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
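The shell snippet above keeps the 127.0.1.1 entry in /etc/hosts in step with the new machine hostname so local name lookups keep working after the rename. A minimal way to check the result by hand might look like this (run on the guest; the commands are an illustration, not taken from the log):

	hostname                                        # expect: default-k8s-diff-port-754486
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect the same name after 127.0.1.1
	getent hosts default-k8s-diff-port-754486       # should resolve locally without DNS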
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
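Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs driver, move conmon into the pod cgroup, and re-open unprivileged low ports via default_sysctls. One way to confirm what the edits left in the file, assuming shell access to the guest (a sketch; the key names and file path come from the commands in the log, the greps do not):

	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -A1 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",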
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
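At this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm.yaml have been copied over and systemd reloaded. A quick manual check that the drop-in written above is the one kubelet actually runs with (not part of the log):

	systemctl cat kubelet     # shows kubelet.service plus the 10-kubeadm.conf drop-in scp'd above
	pgrep -af kubelet         # command line should carry --node-ip=192.168.50.111 from the drop-in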
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
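The three openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject hash. Spelled out by hand for the minikube CA (the b5213941 hash matches the link created in the log; doing it manually like this is only an illustration):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")    # prints b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # OpenSSL looks up CAs by <subject-hash>.0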
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
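The lines above show the stale-config cleanup on the restart path: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf the runner greps for the control-plane endpoint and, when the grep fails (here the files simply do not exist), removes the file so the kubeconfig phase can regenerate it. A minimal Go sketch of that loop; the runCmd helper is illustrative, not minikube's ssh_runner API:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runCmd stands in for minikube's ssh_runner; here it just runs locally.
    func runCmd(args ...string) error {
    	return exec.Command(args[0], args[1:]...).Run()
    }

    func cleanupStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// If the file lacks the expected endpoint (or is missing entirely,
    		// as in the log above), remove it so kubeadm can regenerate it.
    		if err := runCmd("sudo", "grep", endpoint, f); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = runCmd("sudo", "rm", "-f", f)
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }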
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
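After the cleanup, the restart path re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of a full kubeadm init. A sketch of that sequence, assuming the versioned binary directory seen in the log; the runPhase helper and error handling are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runPhase(version, phase string) error {
    	cmd := fmt.Sprintf(
    		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    		version, phase)
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		if err := runPhase("v1.20.0", p); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }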
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
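The repeated pgrep lines that follow are the api_server wait loop: the runner polls for a kube-apiserver process roughly every 500ms until it appears or the wait budget runs out. A small Go sketch of that poll, assuming a 4-minute budget for illustration (the exact timeout is not shown in this log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears or the
    // timeout elapses, mirroring the ~500ms cadence of the log lines above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("apiserver process appeared: pid %s", out)
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver process never appeared within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(4 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }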
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
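The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line ends in the hostname, it either rewrites the existing 127.0.1.1 entry or appends one. A Go sketch that rebuilds that fragment for a given name; this is an illustrative reconstruction, not minikube's exact template:

    package main

    import "fmt"

    // hostsFixCmd returns the shell snippet that ensures /etc/hosts maps
    // 127.0.1.1 to the machine's hostname, as in the log above.
    func hostsFixCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixCmd("no-preload-297799"))
    }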
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
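configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the machine name, then scps ca.pem, server.pem and server-key.pem into /etc/docker. A minimal Go sketch of generating a certificate with those SANs; it is self-signed for brevity, whereas minikube signs with its own CA key pair, so treat the details as an assumption:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs matching the provision.go log line above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-297799"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-297799"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	// Self-signed here; minikube uses ca.pem/ca-key.pem as the parent instead.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }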
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
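The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it against the host-side timestamp of the same call, and only resync when the delta exceeds a tolerance; here the 89.7ms delta passes. A sketch of that comparison using the values from the log, with a 2-second tolerance assumed for illustration (the actual threshold is not shown here):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest/host clock skew is acceptable.
    func withinTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(1722253702, 953940258)
    	remote := time.Date(2024, 7, 29, 11, 48, 22, 864205522, time.UTC)
    	delta, ok := withinTolerance(guest, remote, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }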
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
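The block above configures CRI-O entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A sketch collecting the first four edits in the same order; running them through sh -c locally is an illustrative stand-in for the ssh_runner calls in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

    func main() {
    	// The same sed edits the log applies, in order.
    	edits := []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, crioConf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, crioConf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, crioConf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, crioConf),
    	}
    	for _, e := range edits {
    		if err := exec.Command("sh", "-c", e).Run(); err != nil {
    			fmt.Println("edit failed:", e, err)
    		}
    	}
    }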
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
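start.go waits up to 60s for the crio.sock socket and for crictl, then parses the "crictl version" output shown above (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion). A small parser sketch for that colon-separated key/value format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseCrictlVersion turns "Key:  value" lines into a map.
    func parseCrictlVersion(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	return fields
    }

    func main() {
    	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
    	v := parseCrictlVersion(out)
    	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
    }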
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
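
The /etc/hosts rewrite above is an idempotent replace-then-append: it filters out any existing line ending in a tab followed by control-plane.minikube.internal, then appends the fresh IP mapping, so repeated restarts never accumulate duplicate entries. A minimal Go sketch of the same update (a hypothetical helper for illustration, not minikube's actual code) could look like this:

// ensure_hosts.go — hypothetical illustration of the idempotent /etc/hosts rewrite shown
// above: drop any existing line for the host name, then append the desired
// "IP<TAB>hostname" mapping. Requires root to write /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors `grep -v $'\t<host>$'`: skip lines already mapping the host name.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Mirrors the appended `echo "<ip>\t<host>"`.
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.120", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
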
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
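
Each of the `openssl x509 -noout -checkend 86400` runs above asks whether the named certificate will still be valid 86400 seconds (24 hours) from now; a failing check would trigger certificate regeneration. A minimal Go equivalent using crypto/x509 (an illustrative sketch only; the file path is taken from the log) is:

// checkend.go — a minimal sketch of the `openssl x509 -noout -checkend 86400` test:
// report whether a PEM-encoded certificate stays valid for the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// `openssl -checkend` succeeds when NotAfter is later than now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for the next 24h:", ok)
}
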
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
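
The healthz progression above is typical for a restarted control plane: 403 while anonymous access is still rejected, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200 once every hook reports ok. A minimal sketch of such a wait loop follows (illustration only, not minikube's api_server.go; the real check uses a TLS client carrying the cluster CA, and http.DefaultClient is used here only to keep the sketch short):

// healthz_wait.go — poll an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(ctx context.Context, client *http.Client, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the apiserver answered "ok"
			}
			// 403 (anonymous access rejected) and 500 (post-start hooks pending) land here.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitHealthz(ctx, http.DefaultClient, "https://192.168.39.120:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
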
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
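
The pod_ready.go lines interleaved above are poll loops: each waits up to 4m0s for a pod's Ready condition to become True, and a logged status "Ready":"False" means the poll continues. A condensed client-go sketch of the same idea (illustration only, not minikube's pod_ready.go; the namespace and pod name are taken from the log, and the kubeconfig path is an assumption):

// pod_wait.go — wait for a pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-78fcd8795b-x4t76"); err != nil {
		fmt.Println(err)
	}
}
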
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
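Each cycle of the 70480 process above is minikube asking CRI-O, through crictl, whether any control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager) exist; every query returns an empty ID list. As an illustration only (this is not minikube's cri.go, and the helper and component list are invented for the example), the same check can be reproduced by shelling out to crictl exactly as the Run lines show:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same crictl invocation seen in the log and
// returns the container IDs it prints (empty when nothing matches).
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		// An empty slice here corresponds to the `found id: ""` / `0 containers: []` lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}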
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
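The recurring "describe nodes" failure has the same root cause as the empty crictl listings: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl's connection is refused. A quick way to confirm that, sketched here as a hypothetical probe rather than anything the test harness actually runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If the API server container were up, this dial would succeed;
	// a "connection refused" here matches the kubectl error above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}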
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
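When no containers are found, minikube falls back to gathering the same fixed set of log sources each round: kubelet and CRI-O via journalctl, dmesg, "describe nodes" (which keeps failing as above), and a container-status listing. A simplified sketch of that gathering pass follows; the commands are copied from the Run lines, but the surrounding structure is invented for the example and is not logs.go itself:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		// Run each command through bash, as the log's ssh_runner lines do.
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", s.name, err, out)
	}
}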
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
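Interleaved with the 70480 process, three other runs (pids 69907, 69419, 70231) keep polling their metrics-server pods, which never report Ready. The shape of that wait is a plain check-sleep-retry loop with a deadline; a minimal sketch of that shape (the function name and parameters are invented here, this is not pod_ready.go):

package main

import (
	"fmt"
	"time"
)

// waitForPodReady polls check() until it returns true or the timeout elapses,
// mirroring the repeated `has status "Ready":"False"` lines above.
func waitForPodReady(check func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pod to become Ready")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Dummy check that never succeeds, like the metrics-server pods in this run.
	err := waitForPodReady(func() (bool, error) { return false, nil }, 2*time.Second, 6*time.Second)
	fmt.Println(err)
}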
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
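
With no matching containers, the run falls back to host-level diagnostics: kubelet and CRI-O journals, kernel warnings, raw container status, and a kubectl "describe nodes". The same commands (taken from the log; the crictl lookup rewritten with $() for readability) can be run directly over SSH on the node as root; a short sketch:

    sudo journalctl -u kubelet -n 400                                   # kubelet logs
    sudo journalctl -u crio -n 400                                      # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a    # container status
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
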
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
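
The pod_ready lines interleaved here come from other test runs on the same host, each polling its own metrics-server pod until the Ready condition flips to True. Outside the test harness the same condition can be read with kubectl; a small sketch, using a pod name from the log and a placeholder for the cluster context:

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-vqgtm \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it becomes Ready (gives up after 4 minutes):
    kubectl --context <profile> -n kube-system wait --for=condition=ready \
        pod/metrics-server-569cc877fc-vqgtm --timeout=4m
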
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
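
Each "describe nodes" attempt fails identically because nothing is serving on localhost:8443, which matches the empty kube-apiserver container list above. Two quick manual checks: the first is the same pgrep the log runs, the second is an extra, hypothetical health probe against the apiserver port (not something the log performs):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # any apiserver process at all?
    curl -sk https://localhost:8443/healthz; echo    # connection refused until it is up
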
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
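
The cycle above is minikube probing for each expected control-plane container with crictl and, finding none, falling back to collecting kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of the same probe, assuming crictl is installed on the node and CRI-O is the runtime (the loop and component list here are illustrative, not minikube's own code):

    #!/usr/bin/env bash
    # Probe the runtime for each control-plane component the same way the log
    # above does: list all containers (running or exited) whose name matches,
    # and report when nothing is found. Assumes the default CRI-O socket.
    set -u
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard)
    for name in "${components[@]}"; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "no container found matching \"${name}\""
      else
        printf '%s: %s\n' "${name}" "${ids}"
      fi
    done
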
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
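
Here the four-minute wait for pod metrics-server-569cc877fc-vqgtm to become Ready expires, so minikube gives up on restarting the existing control plane and falls back to a full kubeadm reset. A roughly equivalent manual check, not what minikube runs internally and with a label selector assumed from the standard metrics-server manifest, would be:

    # Wait up to 4 minutes for the metrics-server pod to report Ready,
    # mirroring the timeout that fires in the log above.
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m
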
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
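
Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which simply means nothing is serving on the apiserver port yet. A few quick checks one could run on the node to confirm that (hypothetical diagnostic commands, not part of the test itself):

    # Is anything listening on the apiserver port?
    sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
    # Are the static pod manifests kubelet would launch present at all?
    ls -la /etc/kubernetes/manifests/
    # What has kubelet been doing recently?
    sudo journalctl -u kubelet -n 50 --no-pager
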
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
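
As with the other profile above, the control-plane restart is abandoned after the readiness timeout and the cluster is reset with kubeadm before being re-initialized. A minimal sketch of that reset step, using the same CRI socket and binary path shown in the log (the follow-up copy of the generated kubeadm config appears later in the transcript for another profile):

    # Tear down the partially configured control plane so a fresh
    # 'kubeadm init' can be attempted from the regenerated kubeadm.yaml.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
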
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
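The four grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails, so the following kubeadm init can regenerate it. A minimal shell sketch of what that sequence amounts to (variable names here are illustrative, not minikube's actual code):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent or the file is missing;
      # in either case the stale file is deleted and left for kubeadm to rewrite.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done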
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
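The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. As documented for kubeadm, it can be recomputed on the control-plane node to validate a join command, for example:

    # recompute the CA public-key hash that a joining node must be given
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'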
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
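The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; a bridge CNI conflist of the kind being configured here typically looks roughly like the following (illustrative values only, not the exact file minikube copies):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }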
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
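Once the metrics-server addon reported here has been applied, it can be checked by hand through the aggregated API it registers (assuming admin access to the cluster; the label below is the one metrics-server manifests typically carry):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl top nodes   # only works once the APIService reports Available=True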
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
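The pod_ready.go waits recorded in the stanza above poll each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until its Ready condition reports "True", with a budget of up to 6m0s per pod. A minimal sketch of that kind of readiness poll using client-go follows; waitPodReady, the 2-second poll interval, and the kubeconfig loading are illustrative assumptions, not minikube's actual helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls one pod until its Ready condition is True or the timeout expires.
	// Hypothetical helper for illustration only; minikube's own wait lives in pod_ready.go.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval, not taken from the log
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above; any system-critical pod works the same way.
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-731235", 6*time.Minute))
	}
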
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
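The api_server.go lines above record the healthz gate: once a kube-apiserver process is found via pgrep, the start path probes https://192.168.50.111:8444/healthz until it returns 200 with body "ok". A rough standalone sketch of such a probe is below; checkHealthz is a hypothetical name, and skipping TLS verification is an illustration-only shortcut rather than a statement about how minikube handles the apiserver certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues a single GET against an apiserver /healthz endpoint and
	// prints the status code and body ("ok" when the apiserver reports healthy).
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: certificate verification is disabled here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		return nil
	}

	func main() {
		// Endpoint taken from the log above (default-k8s-diff-port uses 8444).
		if err := checkHealthz("https://192.168.50.111:8444/healthz"); err != nil {
			fmt.Println("healthz check failed:", err)
		}
	}
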
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
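	The 496-byte bridge conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in this log; if its contents need checking, a minimal sketch of inspecting it on the node would be (illustrative, assuming the no-preload-297799 profile from this run):
	
	  # dump the generated bridge CNI config and list what else sits in the CNI config directory
	  minikube -p no-preload-297799 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
	  minikube -p no-preload-297799 ssh "sudo ls -l /etc/cni/net.d/"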
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
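	The three addons reported enabled above can be checked directly against the cluster; a brief verification sketch follows (illustrative; the metrics-server pod is still Pending at this point in the run, so 'kubectl top' only works once it starts serving):
	
	  kubectl --context no-preload-297799 -n kube-system get pods
	  kubectl --context no-preload-297799 get storageclass
	  kubectl --context no-preload-297799 top nodes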
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
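	The healthz probe minikube ran above can be reproduced by hand; a minimal sketch using the same endpoint (certificate verification skipped for brevity; both commands should print a plain "ok"):
	
	  curl -k https://192.168.39.120:8443/healthz
	  kubectl --context no-preload-297799 get --raw /healthz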
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
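	The hints kubeadm prints above all target the kubelet on the v1.20.0 node; a compact way to run them over minikube ssh, sketched for illustration (the profile name is not shown in this part of the log, so substitute the affected profile for <profile>):
	
	  # inspect the kubelet and any crashed control-plane containers on the node
	  minikube -p <profile> ssh "sudo systemctl status kubelet"
	  minikube -p <profile> ssh "sudo journalctl -xeu kubelet --no-pager"
	  minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"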
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.754814746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254172754792943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abbe490c-c5cb-485b-be90-f773d0d28f67 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.755429167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3712697b-931e-48ba-87dd-2aeaf523e7e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.755492891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3712697b-931e-48ba-87dd-2aeaf523e7e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.755524639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3712697b-931e-48ba-87dd-2aeaf523e7e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.793250621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b337207-f5af-4973-a795-61b08c1c650c name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.793340547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b337207-f5af-4973-a795-61b08c1c650c name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.794642125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3554bf8-210e-457e-80e9-651f2ee22477 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.795151883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254172795118752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3554bf8-210e-457e-80e9-651f2ee22477 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.795733787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e255b1ba-8967-4c4e-a4a1-a0af50abd5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.795804454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e255b1ba-8967-4c4e-a4a1-a0af50abd5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.795843686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e255b1ba-8967-4c4e-a4a1-a0af50abd5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.830024783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1af1bc2-1d04-4cf5-b5ff-a6690c15c4ff name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.830105869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1af1bc2-1d04-4cf5-b5ff-a6690c15c4ff name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.832711410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcf29e2e-1fd2-40d2-8fb4-3beda76047e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.833245694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254172833217941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcf29e2e-1fd2-40d2-8fb4-3beda76047e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.834098295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26e2298a-4427-4d36-a848-17bdedde7b0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.834151691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26e2298a-4427-4d36-a848-17bdedde7b0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.834198005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26e2298a-4427-4d36-a848-17bdedde7b0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.870665696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e83b35ae-fa3b-42be-8589-d4ee143dcda3 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.870742472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e83b35ae-fa3b-42be-8589-d4ee143dcda3 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.872482115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29922366-287a-41e5-b6ac-9f3e65a63ae3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.872845916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254172872824839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29922366-287a-41e5-b6ac-9f3e65a63ae3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.873517891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c84f35e6-af60-49f5-ad5f-1bd3f3295959 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.873589896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c84f35e6-af60-49f5-ad5f-1bd3f3295959 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:56:12 old-k8s-version-188043 crio[643]: time="2024-07-29 11:56:12.873630342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c84f35e6-af60-49f5-ad5f-1bd3f3295959 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051118] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040668] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021934] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.587255] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 11:48] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.065705] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081948] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.207768] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.125104] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.281042] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.791991] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.065131] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.421882] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.167503] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 11:52] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[Jul29 11:54] systemd-fstab-generator[5288]: Ignoring "noauto" option for root device
	[  +0.063650] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:56:13 up 8 min,  0 users,  load average: 0.00, 0.08, 0.06
	Linux old-k8s-version-188043 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00021daa0, 0x0, 0x0)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008b8540)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: goroutine 150 [runnable]:
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: net._C2func_getaddrinfo(0xc000a080c0, 0x0, 0xc000b53230, 0xc0008b0730, 0x0, 0x0, 0x0)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         _cgo_gotypes.go:94 +0x55
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: net.cgoLookupIPCNAME.func1(0xc000a080c0, 0x20, 0x20, 0xc000b53230, 0xc0008b0730, 0xc0001e5680, 0xc0006726a0, 0x57a492)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000b723c0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: net.cgoIPLookup(0xc0003941e0, 0x48ab5d6, 0x3, 0xc000b723c0, 0x1f)
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]: created by net.cgoLookupIP
	Jul 29 11:56:09 old-k8s-version-188043 kubelet[5468]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 29 11:56:10 old-k8s-version-188043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 11:56:10 old-k8s-version-188043 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 11:56:10 old-k8s-version-188043 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 11:56:10 old-k8s-version-188043 kubelet[5520]: I0729 11:56:10.685386    5520 server.go:416] Version: v1.20.0
	Jul 29 11:56:10 old-k8s-version-188043 kubelet[5520]: I0729 11:56:10.685663    5520 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 11:56:10 old-k8s-version-188043 kubelet[5520]: I0729 11:56:10.687901    5520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 11:56:10 old-k8s-version-188043 kubelet[5520]: W0729 11:56:10.689272    5520 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 11:56:10 old-k8s-version-188043 kubelet[5520]: I0729 11:56:10.689521    5520 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (246.560053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-188043" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (699.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 11:52:57.607560   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:53:03.511020   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-731235 -n embed-certs-731235
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:01:43.361238243 +0000 UTC m=+6081.444966734
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-731235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-731235 logs -n 25: (2.24821499s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
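The two fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and only resync when the skew is too large. A minimal sketch of that comparison in Go, using the values from this log and an assumed 2s tolerance (the real threshold is not shown here, and this is not minikube's fix.go):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output such as "1722253642.050121254"
    // into a time.Time. It assumes the fractional part is the full 9-digit
    // nanosecond field that %N produces.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1722253642.050121254")
    	if err != nil {
    		panic(err)
    	}
    	// Host-side timestamp taken from the "Remote:" value in the log line above.
    	remote := time.Date(2024, 7, 29, 11, 47, 21, 964537244, time.UTC)
    	delta := guest.Sub(remote)
    	tolerance := 2 * time.Second // assumed threshold, not taken from the log
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }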
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
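The find/mv pipeline above sidelines any bridge or podman CNI configs by appending a .mk_disabled suffix so only minikube's own CNI config remains active. A rough Go equivalent of that rename pass (patterns copied from the command above; this is an illustration, not minikube's cni.go):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Match the same files the `find` command above targets.
    	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
    	for _, pat := range patterns {
    		matches, err := filepath.Glob(pat)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, "rename failed:", err)
    				continue
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }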
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
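The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A hedged sketch of the same line-oriented rewrite in Go (file path and values copied from the log; behaviour is approximated with regexp rather than sed, and this is not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	s := string(data)

    	// Same substitutions the sed commands above perform.
    	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
    	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
    	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")

    	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", conf)
    }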
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
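The bash one-liner above keeps the host.minikube.internal entry idempotent: strip any existing line for that name, append the fresh mapping, and copy the result back over /etc/hosts. The same logic as a small Go sketch (path, IP, and hostname taken from the log line above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	const entry = "192.168.61.1\thost.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale mapping, like the `grep -v` in the command above.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", hostsPath)
    }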
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
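The interleaved retry.go lines above ("will retry after 303.24713ms", "332.87749ms", …) come from a retry loop with growing, jittered delays while libmachine waits for the new VM's DHCP lease to appear. A minimal sketch of that wait-for-IP loop; the lookupIP helper is a placeholder and the backoff schedule is only an approximation of the intervals seen in the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying libvirt's DHCP leases; it is a placeholder.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func waitForIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter, roughly like the intervals in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
    }

    func main() {
    	if _, err := waitForIP("default-k8s-diff-port-754486", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }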
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
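Both preload decisions above (the earlier "assuming images are not preloaded" branch before the tarball copy, and the "all images are preloaded" branch after it) hinge on parsing `sudo crictl images --output json`. A hedged sketch of that check, assuming crictl's JSON shape of an `images` array with `repoTags` entries (field names are crictl's, not minikube's crio.go):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether any image known to CRI-O carries the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if strings.Contains(t, tag) {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	if ok {
    		fmt.Println("all images are preloaded for cri-o runtime.")
    	} else {
    		fmt.Println("assuming images are not preloaded.")
    	}
    }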
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
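Each certificate above is installed the same way: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so the system trust store picks it up. A small sketch of that pattern, shelling out to openssl for the hash exactly as the log does (the installCert helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCert links /etc/ssl/certs/<subject-hash>.0 at the given certificate,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` steps in the log.
    func installCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, like `ln -fs` would.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("installed CA symlink")
    }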
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
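`openssl x509 -checkend 86400` exits successfully only if the certificate is still valid 24 hours from now; the runs above apply it to every control-plane cert before deciding whether regeneration is needed. The equivalent check with Go's crypto/x509, as a sketch (path reused from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // i.e. the case where `openssl x509 -checkend` would fail.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if expiring {
    		fmt.Println("certificate expires within 24h, would regenerate")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }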
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
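The api_server.go lines that follow are a polling loop: probe /healthz roughly every 500ms, tolerate "connection refused" and the early 403/500 responses while the apiserver finishes its post-start hooks, and stop once the endpoint returns 200. A bare-bones version of that loop (anonymous HTTPS client with certificate verification skipped, which is also why the unauthenticated 403s below are expected; a sketch, not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Anonymous probe against the apiserver's self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
    		} else {
    			fmt.Println("stopped:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }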
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
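For readability, here is a minimal sketch of the polling loop the healthz entries above correspond to: GET the apiserver's /healthz over TLS, treat a 500 response (with the per-hook [+]/[-] lines) as "not ready yet", and retry until a 200 "ok" or a deadline. This is not minikube's api_server.go; the insecure TLS setting and the 500 ms interval are assumptions for illustration only.

```go
// Illustrative sketch only (not minikube's api_server.go): poll an apiserver
// /healthz endpoint until it returns 200, retrying on 500s as the log shows.
// InsecureSkipVerify and the 500ms poll interval are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test VM serves a self-signed cert; a real client would load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 500 with per-hook [+]/[-] lines, as in the log above: keep retrying.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.202:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
```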
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
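The log only shows that a 496-byte 1-k8s.conflist is copied into /etc/cni/net.d; it does not show the file's contents. Below is a hypothetical sketch, serialized from Go, of a generic bridge + host-local conflist of the kind such a step installs. Every field value (plugin names, subnet, file mode) is an assumption, not minikube's actual file.

```go
// Hypothetical sketch: build a generic bridge CNI conflist and write it where
// the log above places 1-k8s.conflist. Field values are assumptions.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing under /etc/cni/net.d requires root on the target machine.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
```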
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
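The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, skipping pods on a node that is not yet Ready. As a rough illustration of that shape (not minikube's pod_ready.go), a minimal client-go loop could look like the sketch below; the kubeconfig path, namespace, pod name, 2 s poll interval, and 4 m deadline are assumptions.

```go
// Minimal sketch of a "wait for pod Ready" loop with client-go; not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "coredns-7db6d8ff4d-kwx89"); err != nil {
		panic(err)
	}
}
```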
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
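The "will retry after 3.980350875s: waiting for machine to come up" line above reflects a retry-with-growing-wait helper. A generic sketch of that pattern is shown below; it is not minikube's retry.go, and the base delay, growth factor, and jitter are assumptions.

```go
// Generic retry-with-backoff sketch matching the shape of the retry.go line above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential backoff with a little jitter, e.g. "will retry after 3.98s".
		wait := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	i := 0
	err := retry(5, 500*time.Millisecond, func() error {
		i++
		if i < 3 {
			return fmt.Errorf("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```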
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
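The provision step above mints a server certificate signed by the local CA with the SAN list shown (127.0.0.1, the VM IP, the hostname, localhost, minikube). As a rough illustration, the Go standard library can produce such a certificate as sketched below; this is not minikube's provisioning code, the ca.pem/ca-key.pem paths are hypothetical, and the sketch assumes a PKCS#1 RSA CA key (other key formats would need x509.ParsePKCS8PrivateKey).

```go
// Illustrative sketch: sign a server cert with an existing CA, adding the SANs
// the log shows. Paths, key format, and validity period are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCertPEM, err := os.ReadFile("ca.pem") // hypothetical CA cert path
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem") // hypothetical CA key path
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-754486"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-754486", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.111")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```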
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
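The two fix.go lines above compare the guest clock (read via the "date +%s.%N" command shown earlier) with the host clock and only resync when the delta exceeds a tolerance. A minimal stand-alone sketch of that check is below; running the command locally instead of over SSH and the 2 s tolerance are assumptions for illustration.

```go
// Minimal sketch of a guest-vs-host clock delta check; not minikube's fix.go.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const tolerance = 2 * time.Second // assumed tolerance

	// Stand-in for running `date +%s.%N` on the guest; here it runs locally.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	}
}
```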
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
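The retry.go lines above show libmachine polling for the guest's DHCP lease with a growing, jittered delay between attempts. A minimal sketch of that wait-for-IP pattern in Go (not minikube's actual implementation; lookupIP, the delay schedule, and the timeout are illustrative assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor for the guest's IP address;
// here it always fails, the way the lease lookup does until the VM is up.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after ..." messages in the log.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 400 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}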
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
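For context, the preload step above copies a roughly 450 MB tarball of container images to the node and unpacks it into /var with lz4 decompression, preserving security xattrs so CRI-O's image store comes up ready to use. A minimal sketch of the same extraction run through os/exec (the tar flags and paths are taken from the log; the use of sudo and the error handling are illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as in the log: preserve security xattrs, decompress with lz4,
	// and extract under /var where CRI-O keeps its image store.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extracting preload: %v\n%s", err, out)
	}
	fmt.Printf("took %s to copy over tarball\n", time.Since(start))
}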
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
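Each "needs transfer" decision above comes from inspecting the runtime's image store and falling back to the local cache when the pinned image is missing. A minimal sketch of that check, assuming podman is available on the node (the digest comparison minikube also performs is omitted here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether img is absent from the node's container
// storage, mirroring the "needs transfer" decisions in the log above.
func needsTransfer(img string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // inspect failed: the image is not in the runtime
	}
	return strings.TrimSpace(string(out)) == ""
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		if needsTransfer(img) {
			fmt.Printf("%q needs transfer: not present in container runtime\n", img)
		}
	}
}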
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
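The kubeadm.yaml written above is the multi-document stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of reading the kubelet settings back out of that file with gopkg.in/yaml.v3 (the path is the one from the log; the field selection is illustrative and not minikube's code):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletConfig captures a few of the KubeletConfiguration fields rendered in
// the config dump above; everything else in the stream is ignored.
type kubeletConfig struct {
	Kind          string `yaml:"kind"`
	CgroupDriver  string `yaml:"cgroupDriver"`
	FailSwapOn    bool   `yaml:"failSwapOn"`
	StaticPodPath string `yaml:"staticPodPath"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Printf("cgroupDriver=%s failSwapOn=%v staticPodPath=%s\n",
				doc.CgroupDriver, doc.FailSwapOn, doc.StaticPodPath)
		}
	}
}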
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
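Each `openssl x509 -checkend 86400` run above asks whether a certificate stays valid for at least another 24 hours, which minikube uses to decide whether the certificate is still usable. An equivalent check with Go's crypto/x509 (a sketch; the path is one of the certificate files listed in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: flag the cert if it is not
	// valid for at least another 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for more than 24h")
	}
}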
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
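The pod_ready.go lines interleaved through this section come from a poll that inspects the pod's Ready condition until it turns True or the timeout hits. A minimal client-go sketch of that single check (the kubeconfig path, namespace, and pod name are copied from the log purely for illustration):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True, which is what
// the pod_ready.go poll above is waiting for.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19337-3845/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-569cc877fc-vqgtm", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, isReady(pod))
}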
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
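The provision.go:117 line above generates a TLS server certificate whose SANs cover localhost, the loopback address, the machine name, and the VM's IP. A rough sketch of building a certificate with those SAN fields using crypto/x509; unlike minikube, which signs with its shared CA key (ca.pem/ca-key.pem), this sketch self-signs to stay short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate; the real flow signs with the CA key.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-297799"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-297799"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}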
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
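The configureAuth step above generates a server certificate whose SANs cover the VM's IP and hostnames (127.0.0.1, 192.168.39.120, localhost, minikube, no-preload-297799) and then copies it into /etc/docker on the guest. The following is a minimal, self-contained Go sketch of issuing such a certificate; it is self-signed for brevity (minikube signs with its own CA key instead), and every value comes from the log line above or is purely illustrative.

// Illustrative sketch only, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key for the example.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-297799"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-297799"},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}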
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
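The SSH command logged just above writes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and restarts cri-o. A small sketch of how that remote command string could be assembled from the service CIDR is shown here; the helper name is hypothetical and the SSH runner that would actually execute it is omitted.

package main

import "fmt"

// crioOptionsCommand builds the same shell one-liner the log shows, for a given
// service CIDR (10.96.0.0/12 in this run).
func crioOptionsCommand(serviceCIDR string) string {
	opts := fmt.Sprintf("--insecure-registry %s ", serviceCIDR)
	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
		fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'\n", opts) +
		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
}

func main() {
	fmt.Println(crioOptionsCommand("10.96.0.0/12"))
}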
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
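The guest-clock check above runs `date +%s.%N` on the VM and compares it to the host's clock; here the delta was 89.734736ms. A rough Go equivalent of that comparison is sketched below; the one-second tolerance is an assumption for the example, not minikube's actual setting, and the input assumes nine fractional digits as in the logged output.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns host minus guest.
func clockDelta(guestDate string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Assumes nine fractional digits (nanoseconds), as `date +%N` prints.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return hostNow.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	// Values taken from the log lines above.
	delta, err := clockDelta("1722253702.953940258", time.Unix(1722253702, 864205522))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // hypothetical tolerance for the sketch
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() < tolerance)
}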
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
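After restarting cri-o, the log waits for the CRI socket and then queries the runtime version with crictl. A compact stand-alone sketch of that readiness check follows (run as root on the guest; paths and commands are the ones shown in the log).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The log waits up to 60s for this socket; the sketch just checks once.
	if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
		fmt.Fprintln(os.Stderr, "CRI socket not ready:", err)
		os.Exit(1)
	}
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl version failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
}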
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
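The recurring pod_ready lines in this section come from polling a pod's Ready condition until it flips to True (which never happens here for metrics-server). A rough stand-alone equivalent using kubectl's jsonpath output is sketched below; the pod name, namespace, poll count and interval are placeholders, not the test's actual parameters.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 10; i++ {
		ready, err := podReady("kube-system", "metrics-server-569cc877fc-vqgtm")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod did not become Ready in time")
}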
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
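The "needs transfer" decisions earlier in this block are driven by listing the images the runtime already has (`sudo crictl images --output json`) and comparing against the required set; anything missing is copied from the local cache and loaded with `podman load`. The sketch below shows only the detection half; the JSON field names follow the CRI images output, but treat it as illustrative rather than minikube's implementation, and the required-image list is abbreviated.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Abbreviated subset of the images listed in the log above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"registry.k8s.io/etcd:3.5.14-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("needs transfer:", img)
		}
	}
}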
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
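The kubeadm.yaml written to the node above is rendered from the cluster parameters shown earlier (node name, node IP, API server port, CRI socket). A minimal sketch of that kind of rendering with text/template follows; the template fragment is abbreviated and illustrative, not minikube's full template.

package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	// Values taken from the kubeadm config dump above.
	p := params{
		NodeIP:        "192.168.39.120",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "no-preload-297799",
	}
	template.Must(template.New("kubeadm").Parse(fragment)).Execute(os.Stdout, p)
}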
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
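For context, the probe loop recorded above boils down to running crictl for each control-plane component and checking whether any container IDs come back before attempting to gather logs. A minimal standalone Go sketch of that check (an illustration only, not minikube source; it assumes crictl is on PATH and is invoked via sudo, exactly as in the commands echoed above) might look like:

// probe.go: rough sketch of the per-component container probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command the ssh_runner.go lines above show being run on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		// A count of 0 corresponds to the repeated `found id: ""` / `0 containers` lines above.
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}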
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
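The repeated "connection to the server localhost:8443 was refused" errors in the describe-nodes attempts above indicate that nothing is listening on the apiserver port at all, which is consistent with crictl finding no kube-apiserver container. A quick TCP probe, sketched below in Go (a hypothetical helper, not part of the test harness), is enough to distinguish a refused connection from an apiserver that is up but hanging:

// dialcheck.go: sketch of a connectivity check against the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the kubectl error quoted in the log: the port is closed.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}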
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
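The cycle above is minikube probing the CRI runtime for each expected control-plane container and, finding none, collecting kubelet, CRI-O, dmesg and node diagnostics instead. The same checks can be run by hand on the node (a sketch, assuming SSH access to the VM and that crictl and journalctl are installed there, as they are in this run):

# List any kube-apiserver containers known to the CRI runtime (IDs only); empty output matches the log
sudo crictl ps -a --quiet --name=kube-apiserver
# Recent kubelet and CRI-O logs, gathered the same way as above
sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
# Kernel warnings and errors, filtered exactly as in the log
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400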
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
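The wait above gave metrics-server the full 4m0s to report Ready before minikube gave up on restarting the existing control plane and fell back to a kubeadm reset. To see why a pod never became Ready, the usual checks look like this (a sketch, assuming kubectl access to the cluster under test; the pod name is copied from the log above):

# Where is the pod scheduled and what phase is it in?
kubectl -n kube-system get pod metrics-server-569cc877fc-v94xq -o wide
# Events and container statuses usually name the blocker (image pull, probe failure, etc.)
kubectl -n kube-system describe pod metrics-server-569cc877fc-v94xq
# Read the Ready condition directly, the same condition pod_ready.go is polling
kubectl -n kube-system get pod metrics-server-569cc877fc-v94xq \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'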
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
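The block above treats a kubeconfig as stale unless it references https://control-plane.minikube.internal:8443, deleting any file that fails the grep (here all four are simply missing, so all four rm calls are no-ops). A condensed sketch of the same cleanup:

# For each kubeconfig, keep it only if it points at the expected control-plane endpoint
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done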
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
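kubeadm init finished with only the Service-Kubelet warning, which is addressed by enabling the kubelet unit as the message suggests (a sketch of exactly that, run on the node):

# Enable the kubelet so it starts on boot, per the WARNING above, and make sure it is running now
sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service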
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
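	The file written above (/etc/cni/net.d/1-k8s.conflist) is minikube's bridge CNI configuration. The actual 496-byte payload is not reproduced in this log; the sketch below only illustrates the general shape of a bridge conflist of the kind minikube generates, and every field value in it is an assumption, not the recorded file:

	    # Illustrative only -- the real conflist contents are not captured in this log.
	    sudo mkdir -p /etc/cni/net.d
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF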
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
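	The healthz probe above can be reproduced by hand against the same endpoint shown in the log; on a default RBAC setup /healthz is readable without credentials (commands are illustrative, not part of the test run):

	    # -k skips verification of the cluster's self-signed serving certificate
	    curl -k https://192.168.61.202:8443/healthz
	    # expected response body: ok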
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
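	At this point the embed-certs-731235 cluster is up, but metrics-server-569cc877fc-gxczz is still Pending in the pod lists above; the addon was configured with the image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), so it may never pull successfully. A hypothetical way to inspect it outside the test harness:

	    # Illustrative follow-up commands, not part of the recorded test run
	    kubectl --context embed-certs-731235 -n kube-system get pods | grep metrics-server
	    kubectl --context embed-certs-731235 -n kube-system describe pod metrics-server-569cc877fc-gxczz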
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
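The sequence above (process 69419) finishes the no-preload-297799 start: the addons are applied, the node and system-critical pods report Ready, and the apiserver healthz probe returns 200. The same checks can be repeated by hand; a minimal sketch, assuming only the context name no-preload-297799 and the apiserver endpoint 192.168.39.120:8443 taken from this log:

	# Node readiness, equivalent to the node_ready.go wait above
	kubectl --context no-preload-297799 get nodes
	# System-critical pods, equivalent to the pod_ready.go / system_pods.go waits above
	kubectl --context no-preload-297799 -n kube-system get pods
	# Direct apiserver health probe, equivalent to the api_server.go healthz check above
	curl -k https://192.168.39.120:8443/healthz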
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
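The grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is removed otherwise before kubeadm init is retried. A minimal sketch of the same loop, using the file list and URL from this log:

	# Drop kubeconfig files that do not point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done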
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
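	The kubeadm output above already names the useful next steps; collected here as a minimal shell sketch, assuming shell access to the node (for example via `minikube ssh`), root privileges, and the CRI-O socket path shown in the log:

	    # Is the kubelet service up, and why did it stop?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 50
	    # The health endpoint kubeadm was polling (connection refused in this run):
	    curl -sSL http://localhost:10248/healthz
	    # List control-plane containers via CRI-O and inspect a failing one:
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	If the kubelet never registers as healthy, the journalctl output is the most direct pointer to the cause (cgroup driver, swap, or a bad kubelet config).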
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
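	The suggestion above amounts to retrying the start with the kubelet pinned to an explicit cgroup driver. A minimal sketch, assuming the failing profile name is substituted for PROFILE (it is not shown in this excerpt) and that the kubelet failure really is a cgroup-driver mismatch:

	    # Retry with the kubelet forced onto the systemd cgroup driver, as suggested above:
	    minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd

	If journalctl points at a different cause, this flag will not help; the related issue linked above tracks the cgroup-driver case specifically.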
	
	
	==> CRI-O <==
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.064802562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00c3b3c4-d8ab-45a0-925a-c15d3613c8fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.090465196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2262b44-4131-4e69-b7b5-dacaf93953e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.090561675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2262b44-4131-4e69-b7b5-dacaf93953e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.091544394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2262b44-4131-4e69-b7b5-dacaf93953e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.092660933Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=40a13855-4168-4221-a363-1b138f17122a name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.092878195Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722253960530487579,StartedAt:1722253960563559174,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2fea7bc2-554e-4fe9-b2af-c4e340e85c18/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2fea7bc2-554e-4fe9-b2af-c4e340e85c18/containers/storage-provisioner/07b4309e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2fea7bc2-554e-4fe9-b2af-c4e340e85c18/volumes/kubernetes.io~projected/kube-api-access-jvmq6,Readonly:true,SelinuxRelabel:fal
se,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_2fea7bc2-554e-4fe9-b2af-c4e340e85c18/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=40a13855-4168-4221-a363-1b138f17122a name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.093561353Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=62a76d70-ae0a-44a0-97fc-61df96779a79 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.093615448Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7cf35d7f-1f24-4330-9cfc-75a4e189482c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.093776654Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722253960069906989,StartedAt:1722253960127015458,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/298c2d3b-8a1e-4146-987a-f9c1eff6f92c/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/298c2d3b-8a1e-4146-987a-f9c1eff6f92c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/298c2d3b-8a1e-4146-987a-f9c1eff6f92c/containers/coredns/c103a9bc,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/298c2d3b-8a1e-4146-987a-f9c1eff6f92c/volumes/kubernetes.io~projected/kube-api-access-mdp66,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-rlhzt_298c2d3b-8a1e-4146-987a-f9c1eff6f92c/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7cf35d7f-1f24-4330-9cfc-75a4e189482c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.093946823Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2fea7bc2-554e-4fe9-b2af-c4e340e85c18,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253960341759366,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:52:40.029665085Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2ee17281a25eab2366832a3ae6b98fe9418663be4ebf88a3e8dd6d6c2b0e82c,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-gxczz,Uid:096f1de4-e064-42bc-8a16-aa08320addb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253960106214580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-gxczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096f1de4-e064-42bc-8a16-aa08320addb
4,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:39.781785919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&PodSandboxMetadata{Name:kube-proxy-ch48n,Uid:68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958947608297,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.038283920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6md2j,Ui
d:37472eb3-a941-4ff9-a0af-0ce42d604318,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958827592749,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37472eb3-a941-4ff9-a0af-0ce42d604318,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.515118729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rlhzt,Uid:298c2d3b-8a1e-4146-987a-f9c1eff6f92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958810653168,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.488537858Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-731235,Uid:302d27b116b4d52c090d34d6a9d4555a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939117417354,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.202:2379,kubernetes.io/config.hash: 302d27b116b4d52c090d34d6a9d4555a,kubernetes.io/config.seen: 2024-07-29T11:52:18.644036174Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,
Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-731235,Uid:a60b2b0d2997fb059777c19017f4b354,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253939115526546,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.202:8443,kubernetes.io/config.hash: a60b2b0d2997fb059777c19017f4b354,kubernetes.io/config.seen: 2024-07-29T11:52:18.644037640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-731235,Uid:f60b667741fe404f7fea63d7874436bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939083300491,Labels:m
ap[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f60b667741fe404f7fea63d7874436bf,kubernetes.io/config.seen: 2024-07-29T11:52:18.644030887Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-731235,Uid:8ae99724a0457d2e75a03486422f3aa2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939081619395,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: 8ae99724a0457d2e75a03486422f3aa2,kubernetes.io/config.seen: 2024-07-29T11:52:18.644034868Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-731235,Uid:a60b2b0d2997fb059777c19017f4b354,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722253651059339136,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.202:8443,kubernetes.io/config.hash: a60b2b0d2997fb059777c19017f4b354,kubernetes.io/config.seen: 2024-07-29T11:47:30.546526714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=62a76d70-ae0a-44a0-97fc-61df96779a79 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.094958052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28182ba6-559d-4623-ae85-4dbfc2206b44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.095090743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28182ba6-559d-4623-ae85-4dbfc2206b44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.095241288Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=bbda8825-a56b-4e6e-a075-76d3a23bebe7 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.095369033Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722253959939902171,StartedAt:1722253960014128242,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/37472eb3-a941-4ff9-a0af-0ce42d604318/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/37472eb3-a941-4ff9-a0af-0ce42d604318/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/37472eb3-a941-4ff9-a0af-0ce42d604318/containers/coredns/2d5a4e48,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/37472eb3-a941-4ff9-a0af-0ce42d604318/volumes/kubernetes.io~projected/kube-api-access-4wv7j,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-6md2j_37472eb3-a941-4ff9-a0af-0ce42d604318/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bbda8825-a56b-4e6e-a075-76d3a23bebe7 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.095535515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28182ba6-559d-4623-ae85-4dbfc2206b44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.096490747Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0d2a5a4f-716c-4356-a941-ca22bb5f4ef1 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.096638613Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722253959460388750,StartedAt:1722253959547252727,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/68896b36-6aa0-4dcc-ad3a-74573aa1c3ec/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/68896b36-6aa0-4dcc-ad3a-74573aa1c3ec/containers/kube-proxy/299eb2fb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var
/lib/kubelet/pods/68896b36-6aa0-4dcc-ad3a-74573aa1c3ec/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/68896b36-6aa0-4dcc-ad3a-74573aa1c3ec/volumes/kubernetes.io~projected/kube-api-access-8tdpt,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-ch48n_68896b36-6aa0-4dcc-ad3a-74573aa1c3ec/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-
collector/interceptors.go:74" id=0d2a5a4f-716c-4356-a941-ca22bb5f4ef1 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.097546993Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d26a29fc-1189-4db5-a6bb-85c84b309149 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.097684120Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253939494556769,StartedAt:1722253939623942162,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/302d27b116b4d52c090d34d6a9d4555a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/302d27b116b4d52c090d34d6a9d4555a/containers/etcd/0971d44b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etc
d-embed-certs-731235_302d27b116b4d52c090d34d6a9d4555a/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d26a29fc-1189-4db5-a6bb-85c84b309149 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.098211886Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e5d8f441-20fd-48ff-9056-3586acbe1265 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.098306713Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253939456935745,StartedAt:1722253939593959283,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8ae99724a0457d2e75a03486422f3aa2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8ae99724a0457d2e75a03486422f3aa2/containers/kube-scheduler/b08c67c5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-731235_8ae99724a0457d2e75a03486422f3aa2/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{Cp
uPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e5d8f441-20fd-48ff-9056-3586acbe1265 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.098941994Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3c0a4a05-274e-4f8e-a7d2-b23238687d28 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.099116323Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253939412552059,StartedAt:1722253939507996682,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a60b2b0d2997fb059777c19017f4b354/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a60b2b0d2997fb059777c19017f4b354/containers/kube-apiserver/ba2cae0a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contai
nerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-731235_a60b2b0d2997fb059777c19017f4b354/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3c0a4a05-274e-4f8e-a7d2-b23238687d28 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.100333386Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ea8a70a4-7209-4c60-ae0d-61c5be8d016e name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:01:45 embed-certs-731235 crio[720]: time="2024-07-29 12:01:45.100537032Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253939337660557,StartedAt:1722253939461944652,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f60b667741fe404f7fea63d7874436bf/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f60b667741fe404f7fea63d7874436bf/containers/kube-controller-manager/ae3bb4a7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-731235_f60b667741fe404f7fea63d7874436bf/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cpus
etMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ea8a70a4-7209-4c60-ae0d-61c5be8d016e name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17ed1f9cdc5c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a84509d4ea22e       storage-provisioner
	c504bd9a6517f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1719ec1d1a9f3       coredns-7db6d8ff4d-rlhzt
	f159ded4e861d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4a0ff83e61ae4       coredns-7db6d8ff4d-6md2j
	540f29562a87f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   0ab73f6229550       kube-proxy-ch48n
	292332f55fd85       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   8a11380193b6b       etcd-embed-certs-731235
	4bac7e946a3aa       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   abd79b4ec7ce8       kube-scheduler-embed-certs-731235
	f60dbe60770ee       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   ba91a3458c7d7       kube-apiserver-embed-certs-731235
	afdc1f5fc4c43       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   214b0c6b2009c       kube-controller-manager-embed-certs-731235
	8739168a3bbb1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   14 minutes ago      Exited              kube-apiserver            1                   abdd90a62be04       kube-apiserver-embed-certs-731235
	
	
	==> coredns [c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-731235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-731235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=embed-certs-731235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:52:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-731235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:01:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:57:51 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:57:51 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:57:51 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:57:51 +0000   Mon, 29 Jul 2024 11:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-731235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e72225a70aa443afbe796c8a6ba51195
	  System UUID:                e72225a7-0aa4-43af-be79-6c8a6ba51195
	  Boot ID:                    f81e00dd-ec80-4e0e-b189-1c01131c4473
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6md2j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-rlhzt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-731235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-731235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-731235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-ch48n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-731235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-gxczz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-731235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-731235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-731235 event: Registered Node embed-certs-731235 in Controller
	
	
	==> dmesg <==
	[  +0.040514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.823952] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.675782] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.576972] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.031640] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.055931] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065827] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.199495] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.130976] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.320203] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.450055] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.057915] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.589649] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +4.592807] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320400] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.745588] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 11:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.681486] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +4.710216] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.869378] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[ +13.882153] systemd-fstab-generator[4070]: Ignoring "noauto" option for root device
	[  +0.116175] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 11:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b] <==
	{"level":"info","ts":"2024-07-29T11:52:19.733147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 switched to configuration voters=(15795009111912640435)"}
	{"level":"info","ts":"2024-07-29T11:52:19.733478Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","added-peer-id":"db33251a0b9c6fb3","added-peer-peer-urls":["https://192.168.61.202:2380"]}
	{"level":"info","ts":"2024-07-29T11:52:19.761705Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:52:19.763943Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.202:2380"}
	{"level":"info","ts":"2024-07-29T11:52:19.770361Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.202:2380"}
	{"level":"info","ts":"2024-07-29T11:52:19.769148Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db33251a0b9c6fb3","initial-advertise-peer-urls":["https://192.168.61.202:2380"],"listen-peer-urls":["https://192.168.61.202:2380"],"advertise-client-urls":["https://192.168.61.202:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.202:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:52:19.769185Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:52:19.80091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.800965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgPreVoteResp from db33251a0b9c6fb3 at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.801014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgVoteResp from db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db33251a0b9c6fb3 elected leader db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.80514Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"db33251a0b9c6fb3","local-member-attributes":"{Name:embed-certs-731235 ClientURLs:[https://192.168.61.202:2379]}","request-path":"/0/members/db33251a0b9c6fb3/attributes","cluster-id":"834577a0a9e3ba88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:52:19.80519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:19.805583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:19.811001Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.81691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:19.816948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:19.816987Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.817045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.817063Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.821478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.202:2379"}
	{"level":"info","ts":"2024-07-29T11:52:19.826422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:01:45 up 14 min,  0 users,  load average: 0.25, 0.26, 0.19
	Linux embed-certs-731235 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4] <==
	W0729 11:52:11.532156       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.542040       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.543515       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.545819       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.557092       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.558570       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.577264       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.605779       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.628984       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.629063       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.629323       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.637316       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.640044       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.800628       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.812038       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.851661       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.892598       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.111441       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.144258       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.189769       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.194611       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.293623       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.302989       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.620188       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:15.221367       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc] <==
	I0729 11:55:40.584066       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:57:22.119410       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:22.119744       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 11:57:23.121133       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:23.121180       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 11:57:23.121197       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:57:23.121137       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:23.121277       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:57:23.122428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:58:23.121927       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:58:23.122135       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 11:58:23.122176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:58:23.123125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:58:23.123198       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:58:23.123210       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:00:23.122589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:00:23.122710       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:00:23.122725       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:00:23.123972       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:00:23.124120       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:00:23.124168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88] <==
	I0729 11:56:08.082566       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:56:37.546702       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:56:38.093256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:57:07.552087       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:57:08.102309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:57:37.557667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:57:38.111619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:58:07.564494       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:58:08.120059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 11:58:22.915202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="385.478µs"
	I0729 11:58:34.911998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="106.101µs"
	E0729 11:58:37.570082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:58:38.129741       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:59:07.575138       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:59:08.138238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:59:37.581414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:59:38.147984       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:00:07.586216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:00:08.157049       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:00:37.592184       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:00:38.166122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:07.597746       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:01:08.175955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:37.605454       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:01:38.186573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f] <==
	I0729 11:52:40.068334       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:52:40.163310       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	I0729 11:52:40.406132       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:52:40.406229       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:52:40.406262       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:52:40.414638       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:52:40.414896       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:52:40.414930       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:52:40.417131       1 config.go:192] "Starting service config controller"
	I0729 11:52:40.417227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:52:40.417324       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:52:40.417372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:52:40.418379       1 config.go:319] "Starting node config controller"
	I0729 11:52:40.418480       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:52:40.518060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:52:40.518135       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:52:40.518593       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3] <==
	W0729 11:52:22.149427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:52:22.149455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:52:22.150970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:22.151070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:22.151284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:22.151384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:22.151610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:52:22.154028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:52:22.154243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:52:22.154336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:52:23.083013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.083062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.101215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.101785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.207733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:52:23.208136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:52:23.235154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:23.235207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:23.295143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:23.295296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:23.310620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.310744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.380757       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:52:23.380994       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:52:25.733530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:59:24 embed-certs-731235 kubelet[3884]: E0729 11:59:24.932736    3884 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:59:24 embed-certs-731235 kubelet[3884]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:59:24 embed-certs-731235 kubelet[3884]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:59:24 embed-certs-731235 kubelet[3884]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:59:24 embed-certs-731235 kubelet[3884]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:59:36 embed-certs-731235 kubelet[3884]: E0729 11:59:36.895580    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 11:59:49 embed-certs-731235 kubelet[3884]: E0729 11:59:49.894913    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:00:03 embed-certs-731235 kubelet[3884]: E0729 12:00:03.894947    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:00:15 embed-certs-731235 kubelet[3884]: E0729 12:00:15.895208    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:00:24 embed-certs-731235 kubelet[3884]: E0729 12:00:24.930478    3884 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:00:24 embed-certs-731235 kubelet[3884]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:00:24 embed-certs-731235 kubelet[3884]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:00:24 embed-certs-731235 kubelet[3884]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:00:24 embed-certs-731235 kubelet[3884]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:00:30 embed-certs-731235 kubelet[3884]: E0729 12:00:30.895292    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:00:45 embed-certs-731235 kubelet[3884]: E0729 12:00:45.895347    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:00:59 embed-certs-731235 kubelet[3884]: E0729 12:00:59.895241    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:01:10 embed-certs-731235 kubelet[3884]: E0729 12:01:10.896966    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]: E0729 12:01:24.897399    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]: E0729 12:01:24.930489    3884 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:01:24 embed-certs-731235 kubelet[3884]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:01:37 embed-certs-731235 kubelet[3884]: E0729 12:01:37.895489    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	
	
	==> storage-provisioner [17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f] <==
	I0729 11:52:40.592955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:52:40.603424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:52:40.603709       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:52:40.616496       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:52:40.617159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5!
	I0729 11:52:40.617525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a277a6e-a739-4e1c-bf40-40fb6d89633b", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5 became leader
	I0729 11:52:40.718096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-731235 -n embed-certs-731235
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-731235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gxczz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz: exit status 1 (63.976455ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gxczz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 11:53:19.769511   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:02:04.535025785 +0000 UTC m=+6102.618754263
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-754486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-754486 logs -n 25: (2.175348976s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
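
The libmachine lines above are a poll loop: the kvm2 driver asks libvirt for the domain's DHCP lease and, while no IP is assigned yet, schedules another attempt after a growing, jittered delay ("will retry after ..."). A generic retry-with-backoff sketch of that shape (illustrative only, not the actual retry.go implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a little longer (with jitter) after each failure - the same shape
    // as the "will retry after ..." lines in the log.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(10, 500*time.Millisecond, func() error {
    		// Stand-in for "look up the domain's current IP address".
    		return errors.New("waiting for machine to come up")
    	})
    }
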
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
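
Because existing configuration was found (see "will attempt cluster restart" above), minikube replays individual kubeadm init phases - certs, kubeconfig, kubelet-start, control-plane, etcd - rather than running a full kubeadm init. A rough local sketch of driving that sequence (binary and config paths taken from the log; a plain os/exec runner stands in for minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm" // path as seen in the log
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Phases in the order the log runs them during a control-plane restart.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }
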
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
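
The healthz wait above polls https://192.168.61.202:8443/healthz roughly every 500ms; connection refused, 403 (RBAC bootstrap roles not installed yet) and 500 (post-start hooks still failing) all count as "not ready", and the loop only stops on a 200. A minimal sketch of such a poll loop (illustrative only; TLS verification is skipped here for brevity, whereas minikube validates against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the deadline passes. 403 and 500 responses just mean "keep waiting".
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// For illustration only: skip TLS verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
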
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
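
The pod_ready helper above skips pods while the node itself still reports Ready=False, then keeps polling each system-critical pod until its PodReady condition turns True. A hedged client-go sketch of the core readiness check (not minikube's code; the kubeconfig path is an assumption, while pod name and namespace are taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true once the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-kwx89", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
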
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
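
WaitForSSH above simply retries "exit 0" over SSH (as the docker user, with the machine's private key and host-key checking disabled) until the VM accepts the connection; the provisioner then reuses the same channel for commands such as hostname. A minimal golang.org/x/crypto/ssh sketch of running one command that way (illustrative only; address, user and key path are copied from the log):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH connects with a private key and returns the combined output of
    // a single command, roughly what the provisioner does for "hostname".
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.50.111:22", "docker",
    		"/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }
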
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
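Note on the "%!s(MISSING)" fragments above (and the later "date +%!s(MISSING).%!N(MISSING)", "%!p(MISSING)", and "0%!"(MISSING) occurrences): these are Go's fmt package rendering a format verb that has no matching argument, an artifact of how minikube echoes the command template into its log, not of the command actually executed on the guest. Assuming each verb simply stands for the literal percent sequence (%s, %N, %p, %), the command run here would presumably be:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio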
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
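Taken together, the sed edits logged above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values before crio is restarted (a sketch inferred from the commands, not a dump of the file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]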
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
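The 8-hex-digit names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links: "openssl x509 -hash -noout" prints the hash under which OpenSSL's certificate lookup expects to find a CA, which is why each hashing run is paired with an ln -fs into /etc/ssl/certs. The generic pattern, as an illustrative sketch (path chosen here only as an example):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"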
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
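	(The same probe-and-gather cycle repeats throughout the rest of this log: minikube first checks for a running kube-apiserver process with pgrep, then asks the CRI for containers of each control-plane component with crictl, and, when nothing is found, collects kubelet, dmesg, "describe nodes", CRI-O, and container-status output before retrying. The following is a rough Go sketch of that loop for orientation only; runSSH, the retry count, and the sleep interval are illustrative assumptions, not minikube's actual code.)

	// Illustrative sketch of the probe-and-gather cycle seen in this log.
	// runSSH is a hypothetical stand-in for minikube's ssh_runner; it does not
	// reproduce the real implementation.
	package main

	import (
		"fmt"
		"time"
	)

	// runSSH would execute the given command on the node and return its stdout.
	// Here it only prints the command and pretends nothing was found.
	func runSSH(cmd string) (string, error) {
		fmt.Println("Run:", cmd)
		return "", nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for attempt := 0; attempt < 3; attempt++ {
			// 1. Is an apiserver process running at all?
			runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*")

			// 2. Ask the CRI for containers of each control-plane component.
			found := false
			for _, c := range components {
				out, _ := runSSH("sudo crictl ps -a --quiet --name=" + c)
				if out != "" {
					found = true
				}
			}
			if found {
				return
			}

			// 3. Nothing found: gather diagnostics before the next attempt.
			runSSH("sudo journalctl -u kubelet -n 400")
			runSSH("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
			runSSH("sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
			runSSH("sudo journalctl -u crio -n 400")
			time.Sleep(3 * time.Second)
		}
	}

	(In the log below, every "describe nodes" attempt fails with "connection to the server localhost:8443 was refused" because no apiserver container ever comes up, which is why the cycle keeps repeating.)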
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
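	The blocks above show minikube's control-plane probe loop: it looks for a running kube-apiserver process, then lists CRI containers for each control-plane component by name, and every query returns an empty ID list. A minimal sketch of the same checks run by hand on the node (the pgrep pattern and crictl invocations are the ones shown in the log, with the pattern quoted here for interactive use; crictl on the PATH is assumed):
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	
	An empty result for every name, as here, means none of the control-plane containers were ever created, which matches the "connection to the server localhost:8443 was refused" errors in the describe-nodes output.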
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
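	With no containers found, minikube falls back to collecting node-level logs over SSH; the exact commands are repeated in each cycle above. A minimal sketch for reproducing the same collection manually (binary and kubeconfig paths are taken verbatim from the log and are specific to this v1.20.0 run):
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	
	The describe-nodes step is the only one that fails, and it fails the same way every time: the apiserver never came up, so nothing is listening on localhost:8443 and only the journalctl, dmesg, and container-status output is actually gathered.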
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
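	The interleaved pod_ready lines (pids 69419, 69907, 70231) come from three other test processes running in parallel, each polling a metrics-server pod that never reports Ready. An equivalent manual check, assuming kubectl is pointed at the matching profile's kubeconfig (pod name taken from the log):
	
	    kubectl -n kube-system get pod metrics-server-569cc877fc-vqgtm \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	A result of False here corresponds to the pod_ready.go:102 messages above.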
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
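
The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already mentions the expected control-plane endpoint, otherwise it is removed so kubeadm init can rewrite it. A minimal sketch of that loop in Go, assuming local execution instead of the ssh_runner seen in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleKubeconfigs mirrors the check in the log: grep each kubeconfig
    // for the expected API endpoint and delete the file when grep exits non-zero.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
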
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
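
The join command printed above carries a --discovery-token-ca-cert-hash, which kubeadm documents as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A small Go sketch that recomputes it from a CA file (the /var/lib/minikube/certs/ca.crt path is an assumption; only the hash value itself appears in the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path assumed for illustration; point it at wherever the cluster CA lives.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The hash kubeadm prints is sha256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
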
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
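
The two lines above create /etc/cni/net.d and push a 496-byte conflist for the bridge CNI. The exact contents are not shown in the log, so the JSON below is only a representative bridge/portmap conflist written by a small Go helper (file name reused from the log; contents and the pod subnet are assumptions):

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge CNI config; minikube's real 1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
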
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
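
Right after init, the log shows minikube labelling the primary node with its bookkeeping metadata and creating the minikube-rbac clusterrolebinding. A compact Go sketch issuing the same two kubectl commands (a kubectl on PATH and the default kubeconfig are assumptions; the log uses the bundled binary with an explicit --kubeconfig):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("%v failed: %v", args, err)
        }
    }

    func main() {
        node := "embed-certs-731235" // node name taken from the log above
        // Tag the primary control-plane node with minikube's labels.
        run("kubectl", "label", "--overwrite", "nodes", node,
            "minikube.k8s.io/name="+node, "minikube.k8s.io/primary=true")
        // Grant the kube-system default service account cluster-admin, as the log does.
        run("kubectl", "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default")
    }
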
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
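
The repeated "kubectl get sa default" calls above are a poll: minikube retries about twice a second until the default service account exists, which is what the elevateKubeSystemPrivileges step waits for. A minimal polling sketch (the timeout value and a local kubectl are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout chosen for illustration
        for time.Now().Before(deadline) {
            // Succeeds once the apiserver has created the "default" service account.
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }
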
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
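
Addon enablement above boils down to three kubectl apply calls against manifests that were copied into /etc/kubernetes/addons. A sketch of the same applies, with the bundled kubectl path and KUBECONFIG copied from the log (running them locally rather than over SSH is the assumption):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // apply runs the bundled kubectl against files under /etc/kubernetes/addons,
    // matching the commands shown in the log above.
    func apply(files ...string) {
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.3/kubectl", "apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command("sudo", args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("apply %v failed: %v", files, err)
        }
    }

    func main() {
        apply("/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml")
        apply("/etc/kubernetes/addons/storageclass.yaml")
        apply("/etc/kubernetes/addons/storage-provisioner.yaml")
    }
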
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
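
The pod_ready waits above check each system-critical pod's Ready condition until it reports True. A small sketch of the same check using kubectl's jsonpath output (pod name and namespace come from the log; the 6-minute cap mirrors the logged timeout):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // readyStatus returns the pod's Ready condition status ("True", "False" or "").
    func readyStatus(ns, pod string) string {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if readyStatus("kube-system", "etcd-embed-certs-731235") == "True" {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pod to become Ready")
    }
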
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
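
The healthz wait above issues an HTTPS GET against https://192.168.61.202:8443/healthz, treats a 200 "ok" body as healthy, and then reads the control-plane version. A bare-bones Go sketch; skipping TLS verification keeps it self-contained but is an assumption (minikube's real client trusts the cluster CA), and /version may require credentials depending on the cluster's RBAC settings:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify only for the sketch; the real check verifies the
        // apiserver certificate against the cluster CA.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get("https://192.168.61.202:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        // /version reports the control-plane version (v1.30.3 in the log above).
        ver, err := client.Get("https://192.168.61.202:8443/version")
        if err != nil {
            log.Fatal(err)
        }
        defer ver.Body.Close()
        verBody, _ := io.ReadAll(ver.Body)
        fmt.Printf("version: %s\n", verBody)
    }
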
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
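
The 1-k8s.conflist written above is the bridge CNI configuration referred to by the "Configuring bridge CNI" step. The 496-byte file itself is not reproduced in this log; a minimal sketch of a bridge conflist of this kind, with illustrative values only (not the file minikube actually wrote), looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
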
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
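
The four metrics-server manifests applied above register the aggregated metrics.k8s.io API once the metrics-server pod becomes Ready (it is still Pending further down in this log). A quick manual check of that registration, with illustrative commands that are not part of the recorded test flow:

	kubectl --context no-preload-297799 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-297799 top nodes
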
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
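
The "minor skew: 1" note is informational: kubectl is supported within one minor version of the API server, so a 1.30 client against a 1.31 cluster is within policy. The skew can be confirmed from the client (illustrative command):

	kubectl version
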
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
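	A minimal sketch of acting on that suggestion by hand, assuming shell access to the failing minikube node (these are only the commands the suggestion and the box above name, not a verified fix for this run; profile-specific flags are omitted):

		# Inspect the kubelet service and its recent journal entries inside the VM
		minikube ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
		# Retry the start with the cgroup driver named in the suggestion
		minikube start --extra-config=kubelet.cgroup-driver=systemd
		# If it still fails, collect logs for the linked GitHub issue
		minikube logs --file=logs.txt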
	
	
	==> CRI-O <==
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.150240247Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722253979845550519,StartedAt:1722253979896185142,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6699fd97-db3a-4ad9-911e-637b6401ba46/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6699fd97-db3a-4ad9-911e-637b6401ba46/containers/kube-proxy/b9fbd612,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,Hos
tPath:/var/lib/kubelet/pods/6699fd97-db3a-4ad9-911e-637b6401ba46/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6699fd97-db3a-4ad9-911e-637b6401ba46/volumes/kubernetes.io~projected/kube-api-access-zpq27,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-7gkd8_6699fd97-db3a-4ad9-911e-637b6401ba46/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" f
ile="otel-collector/interceptors.go:74" id=3dd507e6-d6b7-4873-9852-8c71dda55d76 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.150677427Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=14d33d6c-e30f-480c-ab55-3ac0483574db name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.150858975Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253960290457603,StartedAt:1722253960377451780,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c15d3319712b79163fc19ca44c59aba0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c15d3319712b79163fc19ca44c59aba0/containers/etcd/b8479c09,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/
pods/kube-system_etcd-default-k8s-diff-port-754486_c15d3319712b79163fc19ca44c59aba0/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=14d33d6c-e30f-480c-ab55-3ac0483574db name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.151212112Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=dcf3a35e-21ad-40a9-920b-c3f3b99560c8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.151291873Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253960283667270,StartedAt:1722253960416485624,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/27b76178b47939f93fc1d48704ba2f37/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/27b76178b47939f93fc1d48704ba2f37/containers/kube-scheduler/8778daa9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-754486_27b76178b47939f93fc1d48704ba2f37/kube-scheduler/2.log,Resources:&ContainerResources{Lin
ux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dcf3a35e-21ad-40a9-920b-c3f3b99560c8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.151697564Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ea686494-1da6-4d77-9841-b85234632efe name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.151828276Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253960195331785,StartedAt:1722253960295644118,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8d6bbcf80b45c58306dbbe42a634562a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8d6bbcf80b45c58306dbbe42a634562a/containers/kube-controller-manager/244c9cc3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagati
on:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-754486_8d6bbcf80b45c58306dbbe42a634562a/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,Oom
ScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ea686494-1da6-4d77-9841-b85234632efe name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.152252826Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ab7d19da-4c51-4c86-abfa-0e1a6fbb254f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.152340088Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722253960129276295,StartedAt:1722253960240528895,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e218a76b4d6e35d9928c77efd7ba3b21/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e218a76b4d6e35d9928c77efd7ba3b21/containers/kube-apiserver/f653e5fa,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapp
ing{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-754486_e218a76b4d6e35d9928c77efd7ba3b21/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ab7d19da-4c51-4c86-abfa-0e1a6fbb254f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.192862605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=981a4061-fd0a-434a-a41e-c9c24aed279c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.192943078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=981a4061-fd0a-434a-a41e-c9c24aed279c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.193920958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=734d62a6-bc42-41f2-a904-8c8df7bef4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.194419749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254526194392216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=734d62a6-bc42-41f2-a904-8c8df7bef4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.195103807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da5b14fb-0348-4a28-82bf-0772e14581b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.195167372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da5b14fb-0348-4a28-82bf-0772e14581b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.195422511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da5b14fb-0348-4a28-82bf-0772e14581b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.203856040Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=47f0d106-43cf-4568-a657-97e4ca1605d6 name=/runtime.v1.RuntimeService/Status
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.203950189Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=47f0d106-43cf-4568-a657-97e4ca1605d6 name=/runtime.v1.RuntimeService/Status
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.234412596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9723a215-e189-42a2-8c66-76a014b47430 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.234486500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9723a215-e189-42a2-8c66-76a014b47430 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.235859352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b61550-0bce-4336-83d9-2ddbe6cadbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.236319120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254526236286556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b61550-0bce-4336-83d9-2ddbe6cadbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.237041700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1ca4a22-bc34-45f5-af56-3c9231ba5a06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.237097249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1ca4a22-bc34-45f5-af56-3c9231ba5a06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:06 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:02:06.237294220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1ca4a22-bc34-45f5-af56-3c9231ba5a06 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2548381a4637a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fc62c0b588aa7       storage-provisioner
	43bb20ba9479b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f2fc618fbe77c       coredns-7db6d8ff4d-4zl6p
	f2ef4e8748fa9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1af5452099f0f       coredns-7db6d8ff4d-fbcqh
	4fd5708a05499       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   78c72d80c552f       kube-proxy-7gkd8
	eedcbcfb43e07       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   c31d3dd5298f3       kube-scheduler-default-k8s-diff-port-754486
	df8f676fc0fb2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   4d87784eb928a       etcd-default-k8s-diff-port-754486
	0f3bba7db5b3e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   5ee74588375fc       kube-controller-manager-default-k8s-diff-port-754486
	5d436678cc067       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   0fed01a0da3e0       kube-apiserver-default-k8s-diff-port-754486
	
	
	==> coredns [43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-754486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-754486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=default-k8s-diff-port-754486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-754486
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:01:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:58:12 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:58:12 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:58:12 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:58:12 +0000   Mon, 29 Jul 2024 11:52:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.111
	  Hostname:    default-k8s-diff-port-754486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d99557284a0142b5a46816e2f198f833
	  System UUID:                d9955728-4a01-42b5-a468-16e2f198f833
	  Boot ID:                    76398773-4aec-4953-b7f8-29c936d15aff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4zl6p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-fbcqh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-754486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-754486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-754486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-7gkd8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-754486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-rgzfc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-754486 event: Registered Node default-k8s-diff-port-754486 in Controller
	
	
	==> dmesg <==
	[  +0.050893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042268] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.953820] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.551385] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584252] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.143496] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058760] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065046] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.174638] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.149018] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.331400] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.601861] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061440] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.847386] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.677785] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 11:48] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 11:52] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.763542] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +4.878992] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.193480] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[ +13.909823] systemd-fstab-generator[4112]: Ignoring "noauto" option for root device
	[  +0.085256] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 11:54] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b] <==
	{"level":"info","ts":"2024-07-29T11:52:40.585537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 switched to configuration voters=(6256767274637245561)"}
	{"level":"info","ts":"2024-07-29T11:52:40.585855Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d094f4edb090bf55","local-member-id":"56d480afbf0abc79","added-peer-id":"56d480afbf0abc79","added-peer-peer-urls":["https://192.168.50.111:2380"]}
	{"level":"info","ts":"2024-07-29T11:52:40.608384Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:52:40.608646Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.111:2380"}
	{"level":"info","ts":"2024-07-29T11:52:40.608693Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.111:2380"}
	{"level":"info","ts":"2024-07-29T11:52:40.614268Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"56d480afbf0abc79","initial-advertise-peer-urls":["https://192.168.50.111:2380"],"listen-peer-urls":["https://192.168.50.111:2380"],"advertise-client-urls":["https://192.168.50.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:52:40.614343Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:52:41.239641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:41.239761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:41.239784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 received MsgPreVoteResp from 56d480afbf0abc79 at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:41.239798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:41.239821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 received MsgVoteResp from 56d480afbf0abc79 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:41.239832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56d480afbf0abc79 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:41.239839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56d480afbf0abc79 elected leader 56d480afbf0abc79 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:41.243494Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"56d480afbf0abc79","local-member-attributes":"{Name:default-k8s-diff-port-754486 ClientURLs:[https://192.168.50.111:2379]}","request-path":"/0/members/56d480afbf0abc79/attributes","cluster-id":"d094f4edb090bf55","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:52:41.243792Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.243942Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:41.246655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:41.246746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:41.246824Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d094f4edb090bf55","local-member-id":"56d480afbf0abc79","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246938Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246975Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:41.257963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.111:2379"}
	{"level":"info","ts":"2024-07-29T11:52:41.296061Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:02:06 up 14 min,  0 users,  load average: 0.30, 0.20, 0.17
	Linux default-k8s-diff-port-754486 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6] <==
	I0729 11:56:01.981231       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:57:43.169513       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:43.169691       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 11:57:44.170286       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:44.170350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 11:57:44.170360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:57:44.170489       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:57:44.170625       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:57:44.171881       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:58:44.170555       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:58:44.170702       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 11:58:44.170715       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:58:44.173081       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 11:58:44.173164       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:58:44.173178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:00:44.171751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:00:44.171858       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:00:44.171906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:00:44.174143       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:00:44.174292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:00:44.174322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c] <==
	I0729 11:56:29.154964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:56:58.695181       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:56:59.163394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:57:28.703179       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:57:29.171521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:57:58.708629       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:57:59.179889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:58:28.713740       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:58:29.187260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:58:58.719141       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:58:59.195115       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 11:58:59.995912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.483801ms"
	I0729 11:59:14.989035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="104.333µs"
	E0729 11:59:28.724921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:59:29.206146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:59:58.730671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 11:59:59.214213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:00:28.736236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:00:29.223649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:00:58.741814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:00:59.232061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:28.746848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:01:29.241539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:58.752099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:01:59.250357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa] <==
	I0729 11:53:00.034338       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:53:00.052328       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.111"]
	I0729 11:53:00.163925       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:53:00.163962       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:53:00.163978       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:53:00.174728       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:53:00.174932       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:53:00.174962       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:53:00.176020       1 config.go:192] "Starting service config controller"
	I0729 11:53:00.176045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:53:00.176092       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:53:00.176096       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:53:00.180405       1 config.go:319] "Starting node config controller"
	I0729 11:53:00.180418       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:53:00.277051       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:53:00.277152       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:53:00.280514       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b] <==
	W0729 11:52:44.112760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:52:44.112894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:52:44.133346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.133500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.180985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:52:44.181210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:52:44.204215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:52:44.204309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:52:44.308317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:44.308411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:44.408187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.408237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.432330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.434162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.450876       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:52:44.451031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:52:44.451177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:52:44.451261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:52:44.496771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:52:44.496914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:52:44.527924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:44.528025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:44.658045       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:52:44.658114       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:52:46.478527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:59:46 default-k8s-diff-port-754486 kubelet[3920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:59:46 default-k8s-diff-port-754486 kubelet[3920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:59:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:59:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:59:50 default-k8s-diff-port-754486 kubelet[3920]: E0729 11:59:50.972610    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:00:01 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:01.972359    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:00:14 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:14.972169    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:00:25 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:25.973146    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:00:39 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:39.972715    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:00:46 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:46.005496    3920 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:00:46 default-k8s-diff-port-754486 kubelet[3920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:00:46 default-k8s-diff-port-754486 kubelet[3920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:00:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:00:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:00:51 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:00:51.971783    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:01:05 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:01:05.972659    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:01:20 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:01:20.971752    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:01:33 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:01:33.973294    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:01:46 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:01:46.006971    3920 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:01:46 default-k8s-diff-port-754486 kubelet[3920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:01:46 default-k8s-diff-port-754486 kubelet[3920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:01:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:01:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:01:47 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:01:47.972916    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:02:00 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:02:00.971774    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	
	
	==> storage-provisioner [2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f] <==
	I0729 11:53:01.894174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:53:01.907339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:53:01.907483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:53:01.918488       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:53:01.918810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655!
	I0729 11:53:01.922051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0660db1c-300d-466a-9dbd-76ccadc16e39", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655 became leader
	I0729 11:53:02.021065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rgzfc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc: exit status 1 (65.029919ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rgzfc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 11:54:23.690741   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:54:57.915995   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:55:36.672600   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-297799 -n no-preload-297799
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:02:46.760179227 +0000 UTC m=+6144.843907704
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-297799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-297799 logs -n 25: (2.249587323s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
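For reference, the no-preload restart recorded in the audit rows above corresponds to a single invocation; this is a reassembly of the wrapped table cells, not an additional command from the log:

out/minikube-linux-amd64 start -p no-preload-297799 --memory=2200 \
  --alsologtostderr --wait=true --preload=false --driver=kvm2 \
  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0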
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
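While the restarted embed-certs VM sits in the retry loop above waiting for an address, the lease can also be watched from the Jenkins host with libvirt's own tooling; an illustrative check, assuming the standard virsh client is installed there (not shown in this log):

# Show DHCP leases on the profile's libvirt network (network name taken from the log above)
virsh net-dhcp-leases mk-embed-certs-731235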
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
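The external SSH probe logged above can be reproduced by hand with the same key and address; a minimal sketch using only flags that appear in the probe's own command line:

# Re-run the readiness probe manually (key path and IP copied from the log above)
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa \
    docker@192.168.61.202 'exit 0'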
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
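The hosts-file snippet just above is the provisioner's idempotent way of pinning 127.0.1.1 to the new hostname; a quick illustrative check over the same SSH session would be:

# Confirm the loopback hostname entry and the effective hostname (illustrative)
grep -n '^127.0.1.1' /etc/hosts
hostname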
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
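	(Editorial note: the loop above polls the apiserver's /healthz endpoint roughly every 500ms and treats the intermediate 403 and 500 responses as transient until a 200 arrives. A minimal Go sketch of that kind of polling loop follows; the URL, interval, timeout, and skipped TLS verification are illustrative assumptions and this is not minikube's actual api_server.go.)

	// healthz_poll.go: minimal sketch of waiting for an apiserver /healthz endpoint.
	// Endpoint, cadence, and timeout are assumptions, not values from the minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The test cluster uses a self-signed CA, so verification is skipped purely for illustration.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403 while RBAC bootstraps and 500 while post-start hooks finish are expected; retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // mirrors the ~500ms cadence seen in the log
		}
		return fmt.Errorf("apiserver did not report healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.202:8443/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
	}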
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
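	(Editorial note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the log mentions. Its exact contents are not shown here; the sketch below writes a conflist of roughly that shape, with the subnet and plugin list as assumptions for illustration only.)

	// bridge_cni_sketch.go: writes an illustrative bridge CNI conflist of the kind
	// installed at /etc/cni/net.d/1-k8s.conflist. Subnet and plugins are assumptions;
	// the actual 496-byte file is not reproduced in this log.
	package main

	import "os"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Writing this path requires root on the guest machine; the path mirrors the log line above.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}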
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
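	(Editorial note: the pod_ready lines above wait for each system-critical pod in kube-system to report the Ready condition, skipping pods while the hosting node is still NotReady. A rough client-go sketch of that readiness check follows; the kubeconfig path and polling interval are assumptions, and this is not minikube's pod_ready.go. Context and pod names are taken from the log.)

	// podready_sketch.go: illustrative check of a pod's Ready condition via client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the controller-manager pod is Ready, mirroring the ~5.5s wait in the log.
		for {
			ok, err := podReady(cs, "kube-system", "kube-controller-manager-embed-certs-731235")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}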
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
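	(For reference, the delta reported above is simply guest wall-clock minus host wall-clock at the moment of the check: 1722253662.599486483 s - 1722253662.507363501 s = 0.092122982 s, i.e. the 92.122982ms in the log. minikube accepts it without resyncing the guest clock; the exact tolerance threshold is version-dependent and not shown here.)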
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
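	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch reconstructed from the commands in the log, not a capture of the file itself:

		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]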
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
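	The kubeadm configuration dumped above is staged as /var/tmp/minikube/kubeadm.yaml.new, later copied into place as /var/tmp/minikube/kubeadm.yaml, and, as the subsequent log lines show, fed to kubeadm one phase at a time rather than via a single kubeadm init. The sequence used in this run is, in order:

		kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

	(minikube runs these under sudo with its own kubeadm binary from /var/lib/minikube/binaries/v1.30.3 on the PATH, as shown below.)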
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
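	The .0 names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention for /etc/ssl/certs: each link is named after the certificate's subject hash, which is why every ln -fs is preceded by an openssl x509 -hash -noout call. Done by hand it would look roughly like this (paths taken from the log):

		ln -fs /usr/share/ca-certificates/minikubeCA.pem \
		  "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"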
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
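	Each of the -checkend 86400 probes above exits non-zero if the certificate in question would expire within the next 86400 seconds (24 hours); since the log moves straight on to StartCluster, all of these control-plane certificates were still considered valid. Checked by hand, the same test looks like:

		openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		  && echo "valid for at least 24h" || echo "expiring within 24h"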
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
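	The 403 and 500 responses in this stretch of the log are the normal progression while the apiserver finishes its post-start hooks: anonymous probes are forbidden until rbac/bootstrap-roles completes, and /healthz keeps returning 500 while individual hooks are still marked failed. A manual version of the same verbose probe would be something like the following; -k skips verification of the self-signed apiserver certificate, and client credentials would be needed if anonymous access to /healthz is not permitted:

		curl -k "https://192.168.50.111:8444/healthz?verbose"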
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
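The three steps above copy the preload tarball onto the guest, unpack it with lz4 into /var while preserving security.capability xattrs, and then delete the tarball. A hedged Go sketch that shells out to the same tar invocation (the path is a placeholder; this is not minikube's crio.go code):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preload tarball the same way the log does:
// tar with lz4 compression, keeping security.capability xattrs, into /var.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}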
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
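The sequence above installs each CA certificate under /usr/share/ca-certificates and links it as /etc/ssl/certs/<subject-hash>.0 so OpenSSL can look it up; the hash comes from `openssl x509 -hash -noout -in`. A small Go sketch of computing that link name by shelling out to the same command (illustrative, not minikube's certs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs/<hash>.0 path for a PEM cert,
// using the same `openssl x509 -hash -noout -in` call seen in the log.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	return "/etc/ssl/certs/" + hash + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("ln -fs /usr/share/ca-certificates/minikubeCA.pem", link)
}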
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
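The `-checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A sketch of the same check in native Go with crypto/x509 (the file path is only an example from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}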
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
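After the `kubeadm init phase` steps, api_server.go waits for the apiserver process to appear by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, as in the repeated Run lines above. A minimal Go sketch of that wait loop (hypothetical; the real check also probes the API endpoint):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
// until it exits 0 (process found) or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // apiserver process appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}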
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
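The /etc/sysconfig/crio.minikube drop-in written just above (its contents are echoed back in the SSH output) sets CRIO_MINIKUBE_OPTIONS to --insecure-registry 10.96.0.0/12, i.e. the cluster's service CIDR, so registries exposed through ClusterIP services (such as the registry addon) can be pulled from without TLS. A quick way to confirm the option survived the crio restart (assuming, as in minikube's guest image, that the unit's ExecStart expands this environment file):

    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio     # the running command line should carry --insecure-registry 10.96.0.0/12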
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
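The string of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is switched to cgroupfs, conmon_cgroup is set to pod, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. The modprobe br_netfilter and ip_forward writes are the usual bridged-CNI prerequisites; the earlier sysctl probe only failed with status 255 because the module had not been loaded yet. A sketch of how the result could be checked after the crio restart:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected matches (a sketch, not captured output):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    lsmod | grep br_netfilter && cat /proc/sys/net/ipv4/ip_forward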
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
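host.minikube.internal is the guest-side alias for the host machine, here the libvirt gateway 192.168.39.1. The one-liner above is an idempotent hosts update: drop any stale entry, append the fresh one, and copy the temp file back over /etc/hosts in place (a copy rather than a rename, presumably so the same code also works where /etc/hosts is a bind mount). Spelled out:

    ip=192.168.39.1; name=host.minikube.internal
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$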
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
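Because this is a no-preload profile, no preloaded image tarball exists on the node (see the "assuming images are not preloaded" line above), so every image is transferred from the local cache under .minikube/cache/images, or reused when already present as the "copy: skipping ... (exists)" lines show, and then loaded one by one, which is why this phase alone takes ~14.7s. Loading through podman is enough for CRI-O because both share the same containers/storage root; one cycle looks roughly like:

    # one per-image load cycle as seen above (paths taken from the log)
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
    sudo crictl images | grep etcd     # image is now visible to CRI-O through the shared store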
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
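The fragment above is written as a systemd drop-in for the kubelet (10-kubeadm.conf, scp'd a few lines below). The empty ExecStart= line is the standard drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the full command with the node-specific flags is set. To inspect the merged unit on the node:

    systemctl cat kubelet      # base unit plus all drop-ins, including 10-kubeadm.conf
    systemctl daemon-reload    # needed after writing unit files, as the log does further down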
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
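The generated kubeadm config bundles four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration (disk-pressure eviction disabled via the 0% thresholds, cgroupfs driver matching the CRI-O setting above) and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new by the 2168-byte scp below. Recent kubeadm releases can sanity-check such a file directly; a hedged example using this run's binaries path:

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new   # subcommand present in current kubeadm; adjust if absent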
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
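The hash-and-symlink steps above are how each CA gets registered in the guest's system trust store: openssl prints the certificate's subject-name hash, and the trust link under /etc/ssl/certs is that hash plus ".0". A minimal sketch of the same sequence for the minikube CA, with the commands and the hash value (b5213941) taken from the log lines above, illustrative only:

  # Print the subject-name hash that names the trust link (b5213941 in this run).
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # Link the cert into /etc/ssl/certs, then add the hash-named link that TLS lookups use.
  sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
  sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"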
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
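Each -checkend test above asks openssl whether the certificate stays valid for at least the given number of seconds; 86400 is 24 hours, and the command exits 0 only if the cert will not expire within that window. A standalone sketch of the same test, illustrative only, using one of the paths checked above:

  # Exit 0: valid for at least another 24h; exit 1: expires within 24h.
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "ok for 24h" || echo "expires within 24h"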
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
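The 403 responses earlier in this wait are expected: the health probe hits /healthz as system:anonymous, and anonymous read access to that path is only granted once the rbac/bootstrap-roles post-start hook (still marked failed in the 500 output above) has completed. A hedged manual equivalent of the final check, authenticated through the admin kubeconfig regenerated by the restart path, assuming the versioned binaries directory also holds kubectl as it does for the v1.20.0 profile later in this log:

  # Per-check breakdown, similar to the verbose healthz output logged above.
  sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/healthz?verbose'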
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
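pod_ready polls each pod's Ready condition until it turns True or the 4m0s budget runs out. A rough hand-run equivalent of the coredns wait started above, with the --context name assumed to match the profile name as elsewhere in this report:

  kubectl --context no-preload-297799 -n kube-system wait pod/coredns-5cfdc65f69-qz5f7 --for=condition=Ready --timeout=4m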
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
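Every `found id: ""` in this block means the crictl query returned no container IDs for that component in any state, i.e. CRI-O has not created the control-plane containers yet, so minikube falls back to gathering node-level logs below. The same check can be run by hand; the query is copied from the log, and empty output means no matching container:

  sudo crictl ps -a --quiet --name=kube-apiserver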
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
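	The block above is the log collector enumerating the expected control-plane containers one name at a time with `sudo crictl ps -a --quiet --name=<component>`; every query returns an empty ID list, so each component is reported as missing. A rough stand-alone sketch of that enumeration (an illustration, not the actual cri.go implementation) could look like:

```go
// Rough sketch of the enumeration seen above: ask crictl for each expected
// control-plane container by name and report the ones with no IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}
```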
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
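	Interleaved with the 70480 retry loop, the other test processes (69907, 69419, 70231) keep logging pod_ready checks: each polls its metrics-server pod and still finds the Ready condition False. A hedged client-go sketch of such a readiness check (a stand-alone illustration, not the pod_ready.go helper) might be:

```go
// Illustration of a pod readiness check like the pod_ready polling above:
// fetch the pod and inspect its Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log lines above; adjust for the actual suffix.
	ready, err := podReady(client, "kube-system", "metrics-server-569cc877fc-vqgtm")
	fmt.Println(ready, err)
}
```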
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
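	The "container status" step above runs a fallback chain: it prefers crictl and only falls back to `docker ps -a` if the crictl invocation fails. A simplified sketch of that fallback (hypothetical helper, omitting the `which crictl` lookup used in the real command) is:

```go
// Simplified illustration of the container-status fallback: try crictl first,
// then docker, returning whichever succeeds.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(status)
}
```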
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
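
The repeated cycle above (pgrep for kube-apiserver, crictl probes for each control-plane container, then journalctl/dmesg/describe-nodes gathering) is the log collector retrying while the API server on localhost:8443 stays unreachable. Below is a minimal Go sketch of that probe-and-fallback step; it only shells out to the same crictl and journalctl commands that appear in the log, and the helper names (probeContainer, tailUnit) are illustrative assumptions, not minikube's actual functions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainer mirrors the "sudo crictl ps -a --quiet --name=<name>" calls in the
// log: it returns any container IDs found for a given control-plane component.
func probeContainer(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailUnit mirrors the "sudo journalctl -u <unit> -n 400" fallback used when no
// containers are found for a component.
func tailUnit(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := probeContainer(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q, will fall back to unit logs\n", name)
			continue
		}
		fmt.Printf("found %d container(s) for %q\n", len(ids), name)
	}
	if logs, err := tailUnit("crio"); err == nil {
		fmt.Printf("gathered %d bytes of crio logs\n", len(logs))
	}
}
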
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
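
The interleaved pod_ready.go lines come from separate test processes polling whether each metrics-server pod has reached the Ready condition, which keeps reporting "False". A hedged client-go sketch of such a readiness check follows; the kubeconfig path, polling interval, and the podReady helper are illustrative assumptions, not the tests' actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True,
// i.e. the status the log lines above keep printing as "False".
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; the real tests use the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ok, err := podReady(ctx, cs, "kube-system", "metrics-server-569cc877fc-v94xq")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll roughly at the cadence the log shows
	}
}
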
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
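The repeated "connection to the server localhost:8443 was refused" block means nothing is listening on the apiserver port, so the describe-nodes step can only fail; it is logged as a warning and the loop continues. A hypothetical pre-check for that situation could look like the sketch below (this is not minikube's behaviour, just a way to see why the command is doomed while the port is closed):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether anything accepts TCP connections on addr; when
// it does not, "kubectl describe nodes" can only return "connection refused".
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiserverUp("localhost:8443") {
		fmt.Println("apiserver not reachable on localhost:8443; describe nodes would fail")
		return
	}
	fmt.Println("apiserver reachable; safe to run kubectl describe nodes")
}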
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
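The interleaved pod_ready lines come from three other profiles running in parallel (processes 69907, 69419 and 70231), each polling its metrics-server pod roughly every two seconds and reading the pod's Ready condition. The client-go sketch below shows the kind of check behind the `"Ready":"False"` messages; it is a simplified illustration, not pod_ready.go itself, and the pod name is simply copied from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True; the log lines
// above are emitted while this stays false for metrics-server.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-v94xq", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("metrics-server is Ready")
			return
		}
		fmt.Println(`pod "metrics-server-569cc877fc-v94xq" has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}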
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
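At this point process 69907 has waited the full 4m0s for its metrics-server pod (about 4m16s in total for restartPrimaryControlPlane), gives up on reusing the existing control plane, and wipes it with kubeadm reset before re-initializing. The control flow is roughly the deadline-bounded wait sketched below; podReady is a stand-in for the real per-pod check, the timeout in main is shortened for the demo, and the reset command is only printed here rather than executed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitOrReset polls a readiness check until the deadline; if it never turns
// true, it falls back to resetting the control plane with kubeadm.
func waitOrReset(podReady func() bool, timeout time.Duration) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if podReady() {
			fmt.Println("control plane restarted cleanly")
			return
		}
		time.Sleep(2 * time.Second)
	}
	reset := exec.Command("sudo", "kubeadm", "reset",
		"--cri-socket", "/var/run/crio/crio.sock", "--force")
	fmt.Println("! Unable to restart control-plane node(s), will reset cluster:")
	fmt.Println(" ", reset.String())
}

func main() {
	// The real run above waited 4m0s; keep the demo short.
	waitOrReset(func() bool { return false }, 10*time.Second)
}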
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
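The join commands printed above carry a --discovery-token-ca-cert-hash, which is just SHA-256 over the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. As an aside, a minimal Go sketch of that computation; the CA path is an assumption about a typical kubeadm node and this is not kubeadm's own code:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed; on a kubeadm control-plane node the cluster CA usually lives here.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA public key.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}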
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
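The burst of "kubectl get sa default" runs above is a simple poll: keep asking for the default ServiceAccount until it exists, which is what the elevateKubeSystemPrivileges wait reported here is about. A rough sketch of that pattern, reusing the binary and kubeconfig paths from the log; the two-minute deadline and 500ms interval are assumptions, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Mirrors the logged command: succeed once the default ServiceAccount exists.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}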
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
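The node_ready step above is checking the node's Ready condition through the API. A compact, illustrative sketch of that check with client-go, assuming the in-VM kubeconfig path seen elsewhere in the log; this is not the upstream helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "embed-certs-731235", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A node is "Ready" when its NodeReady condition has status True.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s\n", cond.Status)
		}
	}
}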
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
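The grep/rm sequence above is a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8444; otherwise it is removed so the subsequent "kubeadm init" regenerates it. A simplified sketch of that logic (error handling reduced for brevity; file list and URL taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const apiServer = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), apiServer) {
			// Missing or pointing at a different server: delete so kubeadm rewrites it.
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}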
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
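Enabling the addons comes down to copying the manifests into the guest and running "kubectl apply" against them with the in-VM kubeconfig, as the logged command shows. For illustration only, the same invocation expressed as a plain local exec; in minikube it actually runs over SSH inside the VM:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments, matching the logged command line.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}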
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
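The healthz probe above is a plain HTTPS GET against the API server's /healthz endpoint, expecting a 200 response with body "ok". A self-contained sketch follows; InsecureSkipVerify is used only to keep the example short, where a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.202:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
}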
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
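The NodePressure verification reads each node's capacity and pressure conditions from the API, which is where the ephemeral-storage and CPU figures above come from. A sketch of an equivalent check with client-go, with the kubeconfig path assumed as in the earlier examples:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures matching the node_conditions output above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}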
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
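The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration. Its exact contents are not shown in the log; the snippet below writes a generic bridge+portmap conflist of the same shape, with the subnet and plugin options chosen purely for illustration:

package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Mirrors the mkdir + scp pair in the log above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}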
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
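The node_ready/pod_ready waits above have a rough plain-kubectl equivalent; a sketch only, with illustrative timeouts and the label selectors listed earlier in this log:

  kubectl --context default-k8s-diff-port-754486 wait --for=condition=Ready node/default-k8s-diff-port-754486 --timeout=6m
  kubectl --context default-k8s-diff-port-754486 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m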
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
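Once the metrics-server manifests applied above settle, the result could be verified roughly like this (the APIService name v1beta1.metrics.k8s.io is the one metrics-server normally registers and is an assumption here, since the log does not print it):

  kubectl --context default-k8s-diff-port-754486 -n kube-system get deploy metrics-server
  kubectl --context default-k8s-diff-port-754486 get apiservice v1beta1.metrics.k8s.io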
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
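The healthz probe logged above can be reproduced by hand against the same endpoint; -k is assumed because the apiserver certificate is typically not in the caller's trust store:

  curl -k https://192.168.50.111:8444/healthz
  # prints: ok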
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
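A short usage sketch for the context that was just written (the context name matches the profile name in this log; the commands are plain kubectl):

  kubectl config use-context default-k8s-diff-port-754486
  kubectl get pods -A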
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
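When the kubelet-check above keeps failing like this, the usual next step on the node is to look at the unit directly; a sketch with standard systemd commands, none of which appear in the captured log:

  sudo systemctl status kubelet
  sudo journalctl -u kubelet --no-pager -n 100
  curl -sSL http://localhost:10248/healthz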
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
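The preflight warning above names its own fix; as a one-line sketch, enabling the unit so it comes back after reboots:

  sudo systemctl enable kubelet.service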
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
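	The suggestion printed above points at a kubelet/cgroup-driver problem on the node. A minimal sketch of how that advice could be followed up is shown below; the commands are the ones the log itself recommends, while the profile name placeholder <profile> and the use of sudo are assumptions, not part of the captured output:
	
	# Check why the kubelet is not serving /healthz (commands suggested in the kubeadm output above)
	minikube ssh -p <profile> "systemctl status kubelet"
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 50"
	
	# Retry the start with the cgroup driver the log suggests (hedged example; other flags unchanged)
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	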
	
	
	==> CRI-O <==
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.372701423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254568372681285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=141a396b-c871-4ee6-b644-54308fc14c91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.373567942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=065c4965-e91b-4f6f-97b4-ed1ab4a49313 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.373639352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=065c4965-e91b-4f6f-97b4-ed1ab4a49313 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.373842383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=065c4965-e91b-4f6f-97b4-ed1ab4a49313 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.413498249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf36c556-166d-4f77-bef4-56b2bcc06915 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.413595662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf36c556-166d-4f77-bef4-56b2bcc06915 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.414763132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=950f8bf1-ec2a-4a01-85ba-e988f7ff2add name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.415235370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254568415211405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=950f8bf1-ec2a-4a01-85ba-e988f7ff2add name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.415819280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ac0bf2c-2a57-485f-8b91-f92db9e7ca8d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.415889867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ac0bf2c-2a57-485f-8b91-f92db9e7ca8d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.416151289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ac0bf2c-2a57-485f-8b91-f92db9e7ca8d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.456517143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0751a90-fce5-4dcd-a6a2-ce8c70becc01 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.456619458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0751a90-fce5-4dcd-a6a2-ce8c70becc01 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.457730648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b87b838d-8708-4edc-88ce-3fe5ecb9e39a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.458151775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254568458070048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b87b838d-8708-4edc-88ce-3fe5ecb9e39a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.458661975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f9906a6-153f-4add-93cf-72bd5071617b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.458727292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f9906a6-153f-4add-93cf-72bd5071617b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.458974451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f9906a6-153f-4add-93cf-72bd5071617b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.492285149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=051f543d-13f6-494c-b1d9-ccdc7ebd71fc name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.492374952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=051f543d-13f6-494c-b1d9-ccdc7ebd71fc name=/runtime.v1.RuntimeService/Version
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.493923889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ece4fbe5-b0f1-438e-957a-a5a9d8448f28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.494537864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254568494511864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ece4fbe5-b0f1-438e-957a-a5a9d8448f28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.494997166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc2c34c1-c1b2-4f15-a7e2-e3b85ecd466e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.495068956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc2c34c1-c1b2-4f15-a7e2-e3b85ecd466e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:02:48 no-preload-297799 crio[724]: time="2024-07-29 12:02:48.495325374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc2c34c1-c1b2-4f15-a7e2-e3b85ecd466e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee36092b457       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c1451dcfd7f41       storage-provisioner
	9d9b00ee071e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4f2629ba00a9f       coredns-5cfdc65f69-bnqrr
	1f49db7287541       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5111f1fc15e65       coredns-5cfdc65f69-7n6s7
	c47520a7ce939       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   535ebdf1385bd       kube-proxy-blx4g
	b9849f4439601       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   8ab8865d8537f       kube-scheduler-no-preload-297799
	1520e4956aff0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   631fe808ce547       kube-apiserver-no-preload-297799
	ace2035e6f2a6       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   e03ca293b0943       kube-controller-manager-no-preload-297799
	7b405cd582679       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   8d2b1da13933d       etcd-no-preload-297799
	2e605ca417408       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   2993c2108d0ae       kube-apiserver-no-preload-297799
	
	
	==> coredns [1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-297799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-297799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=no-preload-297799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:53:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-297799
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:02:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:58:49 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:58:49 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:58:49 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:58:49 +0000   Mon, 29 Jul 2024 11:53:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    no-preload-297799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5c091591c34b2b97ae36f53988a04d
	  System UUID:                7d5c0915-91c3-4b2b-97ae-36f53988a04d
	  Boot ID:                    6c6ddb4a-0129-452b-989d-c392393f37ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-7n6s7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-5cfdc65f69-bnqrr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-297799                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-297799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-297799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-blx4g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-297799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-vxjvd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-297799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-297799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-297799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-297799 event: Registered Node no-preload-297799 in Controller
	
	
	==> dmesg <==
	[  +0.041196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.165941] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.681317] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594233] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.440389] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.067447] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062707] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.169751] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.153169] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.299997] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.292596] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.066209] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.902442] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +3.884564] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.253377] kauditd_printk_skb: 57 callbacks suppressed
	[Jul29 11:49] kauditd_printk_skb: 28 callbacks suppressed
	[Jul29 11:53] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.836367] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +4.460357] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.104556] systemd-fstab-generator[3276]: Ignoring "noauto" option for root device
	[  +5.412695] systemd-fstab-generator[3392]: Ignoring "noauto" option for root device
	[  +0.051433] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.210878] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81] <==
	{"level":"info","ts":"2024-07-29T11:53:26.223189Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T11:53:26.223281Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T11:53:26.223294Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T11:53:26.223779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 switched to configuration voters=(12622623832313748944)"}
	{"level":"info","ts":"2024-07-29T11:53:26.223919Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","added-peer-id":"af2c917f7a70ddd0","added-peer-peer-urls":["https://192.168.39.120:2380"]}
	{"level":"info","ts":"2024-07-29T11:53:26.673161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:53:26.673224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:53:26.673253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgPreVoteResp from af2c917f7a70ddd0 at term 1"}
	{"level":"info","ts":"2024-07-29T11:53:26.673268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:53:26.673273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgVoteResp from af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-07-29T11:53:26.673289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:53:26.673296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af2c917f7a70ddd0 elected leader af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-07-29T11:53:26.677304Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"af2c917f7a70ddd0","local-member-attributes":"{Name:no-preload-297799 ClientURLs:[https://192.168.39.120:2379]}","request-path":"/0/members/af2c917f7a70ddd0/attributes","cluster-id":"f3de5e1602edc73b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:53:26.677441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:53:26.677614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:53:26.677807Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.686728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:53:26.686868Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:53:26.686961Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:53:26.688147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:53:26.696624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-07-29T11:53:26.69704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.706372Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.706507Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.710714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:02:48 up 14 min,  0 users,  load average: 0.11, 0.15, 0.12
	Linux no-preload-297799 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea] <==
	W0729 11:58:29.816377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 11:58:29.816427       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 11:58:29.817554       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 11:58:29.817620       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 11:59:29.818404       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 11:59:29.818522       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 11:59:29.818682       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 11:59:29.818865       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 11:59:29.819747       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 11:59:29.820831       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:01:29.820773       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:01:29.820889       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 12:01:29.821002       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:01:29.821115       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 12:01:29.822065       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 12:01:29.822223       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30] <==
	W0729 11:53:21.985642       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:21.991397       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.017518       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.039702       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.074317       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.095691       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.108619       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.171295       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.178856       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.185594       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.186958       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.197508       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.262350       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.265851       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.280617       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.300158       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.337259       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.350010       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.423606       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.453450       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.477415       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.552579       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.563574       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.718806       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.777630       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763] <==
	E0729 11:57:36.718472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 11:57:36.748763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:58:06.726366       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 11:58:06.758600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:58:36.734828       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 11:58:36.768323       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 11:58:49.095768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-297799"
	E0729 11:59:06.741931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 11:59:06.776831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 11:59:36.749236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 11:59:36.785552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 11:59:36.811565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="300.946µs"
	I0729 11:59:50.809661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="283.302µs"
	E0729 12:00:06.757191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:00:06.801626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:00:36.764501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:00:36.810059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:06.771633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:01:06.818327       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:01:36.779512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:01:36.826247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:02:06.786583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:02:06.838316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:02:36.793587       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:02:36.846733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 11:53:37.658814       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 11:53:37.698616       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	E0729 11:53:37.698696       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 11:53:37.792392       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 11:53:37.792422       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:53:37.792453       1 server_linux.go:170] "Using iptables Proxier"
	I0729 11:53:37.862154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 11:53:37.862406       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 11:53:37.862417       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:53:37.864052       1 config.go:197] "Starting service config controller"
	I0729 11:53:37.864136       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:53:37.864170       1 config.go:104] "Starting endpoint slice config controller"
	I0729 11:53:37.864177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:53:37.864813       1 config.go:326] "Starting node config controller"
	I0729 11:53:37.864820       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:53:37.966212       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:53:37.966260       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:53:37.966280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179] <==
	W0729 11:53:29.720438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:53:29.720555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.775342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:53:29.775792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.776065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:29.776190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.889483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:53:29.889585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.932286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:53:29.933655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.953299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:53:29.953412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.987184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:29.987716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.052218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:53:30.052432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.086323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:53:30.086370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.116768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:53:30.116854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.142940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:30.143030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.232502       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:53:30.232562       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0729 11:53:32.636551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:00:31 no-preload-297799 kubelet[3283]: E0729 12:00:31.857847    3283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:00:31 no-preload-297799 kubelet[3283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:00:31 no-preload-297799 kubelet[3283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:00:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:00:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:00:42 no-preload-297799 kubelet[3283]: E0729 12:00:42.792837    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:00:54 no-preload-297799 kubelet[3283]: E0729 12:00:54.793616    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:01:06 no-preload-297799 kubelet[3283]: E0729 12:01:06.792673    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:01:18 no-preload-297799 kubelet[3283]: E0729 12:01:18.792382    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:01:29 no-preload-297799 kubelet[3283]: E0729 12:01:29.791731    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:01:31 no-preload-297799 kubelet[3283]: E0729 12:01:31.866845    3283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:01:31 no-preload-297799 kubelet[3283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:01:31 no-preload-297799 kubelet[3283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:01:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:01:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:01:44 no-preload-297799 kubelet[3283]: E0729 12:01:44.792010    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:01:59 no-preload-297799 kubelet[3283]: E0729 12:01:59.791778    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:02:12 no-preload-297799 kubelet[3283]: E0729 12:02:12.792413    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:02:27 no-preload-297799 kubelet[3283]: E0729 12:02:27.793067    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:02:31 no-preload-297799 kubelet[3283]: E0729 12:02:31.858160    3283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:02:31 no-preload-297799 kubelet[3283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:02:31 no-preload-297799 kubelet[3283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:02:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:02:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:02:41 no-preload-297799 kubelet[3283]: E0729 12:02:41.795500    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	
	
	==> storage-provisioner [10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee] <==
	I0729 11:53:38.861735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:53:38.874931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:53:38.875052       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:53:38.902843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:53:38.903436       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a!
	I0729 11:53:38.914201       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f8fbb97-e00d-4237-a520-406fd1ced5fc", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a became leader
	I0729 11:53:39.004815       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-297799 -n no-preload-297799
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-297799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-vxjvd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd: exit status 1 (62.787244ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-vxjvd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:56:20.496763   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:56:59.718692   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:57:06.187727   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:57:13.095144   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:57:43.541634   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:57:57.607753   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:58:00.969898   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:58:03.511214   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:58:19.769513   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:58:29.234819   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:58:36.138719   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
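The interleaved cert_rotation errors (the E0729 lines above) are separate from the dashboard poll: client-go's certificate reload logic (cert_rotation.go) is trying to reopen client certificates for kubeconfig entries such as kindnet-184479 and auto-184479, most likely left over from network-plugin profiles deleted earlier in the run. Listing the profiles directory from the path in those messages shows which client certificates actually remain (path taken verbatim from the log; the interpretation is an assumption):

    ls /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/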
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:59:20.652891   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:59:23.690805   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:59:42.813863   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 11:59:57.915586   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:00:36.673031   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:00:46.736515   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:01:20.497585   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:02:06.187407   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:02:13.094546   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:02:57.607408   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:03:03.511377   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:03:19.769871   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:04:23.691011   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
(the preceding WARNING was emitted 34 times in total as the harness retried the pod list)
E0729 12:04:57.915669   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
(the preceding WARNING was emitted 16 times in total as the harness retried the pod list)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
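The run of connection-refused warnings above indicates that nothing was listening on the apiserver endpoint 192.168.72.61:8443 for the entire 9-minute poll, which matches the "Stopped" apiserver status reported below. A minimal sketch of how the endpoint could be probed by hand from the CI host (this assumes shell access to the agent and that curl is installed; neither is part of the harness):

    # Probe the same endpoint the pod-list calls were hitting. "connection refused"
    # here means no apiserver process is listening, not that the VM itself is down.
    curl -sk --max-time 5 https://192.168.72.61:8443/healthz \
      || echo "apiserver not accepting connections on 192.168.72.61:8443"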
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (231.991904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-188043" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
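The status checks the harness runs above can be repeated by hand against the same profile; a minimal sketch, assuming the old-k8s-version-188043 profile still exists on the host, that kubectl is installed, and that the kubeconfig context carries the profile name as minikube creates it by default (the harness itself skips kubectl here because the apiserver is stopped):

    # Mirror the harness's status call for the failing profile.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043

    # If the apiserver were running, the dashboard pod being waited for could be
    # listed with the same label selector that appears in the warnings above.
    kubectl --context old-k8s-version-188043 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard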
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (233.762966ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25: (1.68230983s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
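(The SSH script above makes the guest resolve its own hostname locally: an existing 127.0.1.1 entry in /etc/hosts is rewritten to point at default-k8s-diff-port-754486, otherwise one is appended. A quick hedged check on the guest, not part of the logged run:)

    # confirm the hostname mapping the script just ensured
    grep '^127.0.1.1' /etc/hosts
    getent hosts default-k8s-diff-port-754486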
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
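(configureAuth above generates a server certificate with SANs for 127.0.0.1, 192.168.50.111, default-k8s-diff-port-754486, localhost and minikube, then copies it to /etc/docker on the guest. A hedged sketch for confirming those SANs, using the path from the scp lines above:)

    # inspect the SANs of the freshly copied server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'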
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
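(The SSH command above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts crio. A minimal sketch for double-checking it from the host, reusing the profile name and test binary from this run; this verification is illustrative, not part of the test:)

    out/minikube-linux-amd64 -p default-k8s-diff-port-754486 ssh "cat /etc/sysconfig/crio.minikube"
    out/minikube-linux-amd64 -p default-k8s-diff-port-754486 ssh "systemctl is-active crio"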
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
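(The sysctl probe above exits with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; minikube therefore falls back to modprobe and then enables IPv4 forwarding. The equivalent manual steps on the guest, as a sketch; the persistence line is an addition the log does not perform:)

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # optional, to survive reboots (assumption, not in the log):
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf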
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
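(Once the preload tarball is extracted into /var, the second crictl scan above finds every image required for v1.30.3, so no per-image pulls are needed. A hedged spot-check on the guest:)

    # confirm the control-plane images came in via the preload
    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'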
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
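(The kubelet drop-in above uses the standard systemd override pattern: the empty ExecStart= clears any packaged command before the minikube-specific invocation is set; per the scp lines further down it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for viewing the merged unit on the guest:)

    # show the unit plus drop-ins exactly as systemd resolves them
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart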
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
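(The kubeadm/kubelet/kube-proxy configuration printed above is what was just written to /var/tmp/minikube/kubeadm.yaml.new. A hedged way to sanity-check that file without modifying the node, using the kubeadm binary the log found under /var/lib/minikube/binaries; on a node that already hosts a control plane the dry run may still print preflight warnings:)

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run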
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
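
The fix.go lines above compare the guest clock against the host-side timestamp and accept the 94.5ms delta as within tolerance. A minimal sketch of that comparison, using the two timestamps from this log (the 2-second tolerance is an assumption of the sketch, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute guest/host clock skew and
// whether it falls inside the allowed window.
func clockDeltaWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1722253682444336856)  // 2024-07-29 11:48:02.444336856 UTC (guest clock)
	remote := time.Unix(0, 1722253682349793034) // 2024-07-29 11:48:02.349793034 UTC (host view)
	d, ok := clockDeltaWithinTolerance(guest, remote, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
}
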
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
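
The three commands above show the fallback path: the sysctl probe fails with status 255 because br_netfilter is not loaded, so the module is loaded explicitly and IPv4 forwarding is enabled. A rough reconstruction of that sequence (illustrative only, not the minikube source; it must run as root on a Linux guest):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-netfilter sysctl key, loads the
// br_netfilter module when the key is missing, then enables IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key missing: load the module so the bridge sysctl keys exist.
		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup:", err)
	}
}
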
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
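
Here the existence check for /preloaded.tar.lz4 fails, so the ~473 MB preload tarball is copied to the guest. A small sketch of that decision, with statOnGuest and copyToGuest as hypothetical stand-ins for the ssh_runner stat and scp calls seen in the log:

package main

import "fmt"

// ensurePreloadTarball copies the preload tarball to the guest only when the
// remote existence check fails, avoiding a large transfer on warm starts.
func ensurePreloadTarball(localTarball, remotePath string,
	statOnGuest func(path string) error,
	copyToGuest func(src, dst string) error) error {
	if err := statOnGuest(remotePath); err == nil {
		return nil // tarball already on the guest, skip the transfer
	}
	return copyToGuest(localTarball, remotePath)
}

func main() {
	err := ensurePreloadTarball(
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4",
		func(path string) error { return fmt.Errorf("not found") },
		func(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil },
	)
	fmt.Println("err:", err)
}
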
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
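
LoadCachedImages gives up here because the local cache is missing the pause image file, and the cluster will fall back to pulling images. The per-image logic visible in the preceding lines (inspect the runtime's image ID, compare against the pinned hash, remove on mismatch, reload from cache) can be summarized as the sketch below; the three function parameters are placeholders for the podman/crictl calls, not real minikube APIs:

package main

import (
	"errors"
	"fmt"
)

// syncImage keeps the runtime's copy of an image in sync with the pinned ID:
// do nothing if it matches, otherwise remove it and reload from the cache.
func syncImage(image, wantID string,
	inspectID func(image string) (string, error),
	remove func(image string) error,
	loadFromCache func(image string) error) error {
	gotID, err := inspectID(image)
	if err == nil && gotID == wantID {
		return nil // runtime already has the expected content
	}
	if err == nil {
		// Present but with the wrong content: remove before reloading.
		if err := remove(image); err != nil {
			return err
		}
	}
	return loadFromCache(image)
}

func main() {
	err := syncImage("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
		func(string) (string, error) { return "", errors.New("no such image") },
		func(string) error { return nil },
		func(string) error { fmt.Println("loading from local cache"); return nil },
	)
	fmt.Println("err:", err)
}
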
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
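
The YAML above is the kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration of how such a file can be rendered from the options listed at kubeadm.go:181, here is a reduced text/template sketch covering only a few ClusterConfiguration fields (not minikube's actual bootstrapper template):

package main

import (
	"os"
	"text/template"
)

// clusterOpts carries the handful of planner options used by this sketch.
type clusterOpts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := clusterOpts{
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal",
	}
	if err := template.Must(template.New("cc").Parse(clusterTmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
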
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
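The six "-checkend 86400" probes above simply ask openssl whether each control-plane certificate will still be valid 24 hours from now. A small sketch of the same check, assuming openssl is on PATH and the paths below (taken from the log) exist locally:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h reproduces the probe from the log: openssl exits 0 when
// the certificate is still valid 86400 seconds (24 hours) from now.
func certValidFor24h(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, "valid for 24h:", certValidFor24h(c))
	}
}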
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
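The grep/rm sequence above keeps a kubeconfig only if it already references the expected control-plane endpoint, otherwise deletes it so the next kubeadm phase regenerates it. A local sketch of that loop, assuming the same endpoint and file names as in the log:

package main

import (
	"fmt"
	"os/exec"
)

// Endpoint and file names are the ones visible in the log above.
const endpoint = "https://control-plane.minikube.internal:8443"

// pruneStaleConf keeps the kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise it removes the file so the
// following "kubeadm init phase kubeconfig" regenerates it.
func pruneStaleConf(path string) {
	if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
		fmt.Printf("%s does not reference %s - removing\n", path, endpoint)
		_ = exec.Command("sudo", "rm", "-f", path).Run()
	}
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		pruneStaleConf("/etc/kubernetes/" + f)
	}
}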
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
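On the restart path, minikube replays individual "kubeadm init phase" steps against the existing kubeadm.yaml rather than running a full init. A sketch of that sequence, assuming local execution; the PATH prefix and config path are copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Replays the individual init phases seen above against the existing
// kubeadm config, prefixing PATH with the cached kubeadm binaries.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		cmd := exec.Command("kubeadm", append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}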
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
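The "will retry after ..." lines above come from a backoff loop that waits for the VM to acquire a DHCP lease. A generic sketch of that pattern (not minikube's actual retry package); the attempt count and base interval are chosen for illustration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry sleeps a growing, jittered interval between attempts, in the spirit
// of the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retry(4, time.Second, func() error {
		return fmt.Errorf("unable to find current IP address of domain")
	})
}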
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
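The pgrep loop above polls for the kube-apiserver process roughly every 500ms until it appears. A sketch of the same wait, assuming local execution and a 2-minute timeout chosen for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for the kube-apiserver process about every
// 500ms until it shows up or the deadline passes, matching the loop above.
func waitForAPIServer(timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			var pid int
			fmt.Sscanf(string(out), "%d", &pid)
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	fmt.Println(pid, err)
}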
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
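provision.go above generates a per-machine server certificate whose SANs cover loopback, the VM IP and its hostnames. A standalone sketch that builds a certificate with the same SAN set via crypto/x509; it self-signs for brevity, whereas minikube signs with the CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Builds a certificate carrying the SAN set from the log
// ([127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]).
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-297799"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "no-preload-297799"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated server cert, %d DER bytes\n", len(der))
}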
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
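fix.go above reads the guest clock with `date +%s.%N`, diffs it against the host clock and only acts when the delta exceeds a tolerance. A local sketch of that comparison; the 1s tolerance here is an assumption, not minikube's value:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// Reads a clock via `date +%s.%N`, diffs it against time.Now() and checks
// it against a tolerance, in the spirit of the guest-clock check above.
func main() {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		fmt.Println("date:", err)
		return
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("clock delta %s, within 1s tolerance: %v\n", delta, delta < time.Second)
}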
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
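The sed calls above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A sketch of the same two substitutions done locally with Go's regexp package, using the values from the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Applies the two substitutions from the log to the CRI-O drop-in config:
// pin the pause image and switch the cgroup manager to cgroupfs.
func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}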
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
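
The sequence above is the cache path for a no-preload start: each required image is inspected in the runtime with `podman image inspect`, anything missing is flagged "needs transfer", the stale tag is removed with `crictl rmi`, and the pre-pulled tarball under .minikube/cache/images is streamed in with `podman load`. A minimal sketch of that inspect-then-load decision, using only the commands visible in the log (the ensureCached helper and its error handling are illustrative, not minikube's cache_images implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureCached mirrors the inspect -> "needs transfer" -> load sequence in
    // the log: if the image is absent from the container runtime, stream the
    // cached tarball in with `podman load`. Illustrative sketch only.
    func ensureCached(image, tarball string) error {
    	// `podman image inspect --format {{.Id}}` exits non-zero when the image is missing.
    	if err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Run(); err == nil {
    		return nil // already present, nothing to transfer
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	// Image tag and tarball path as they appear in the log above.
    	if err := ensureCached("registry.k8s.io/etcd:3.5.14-0",
    		"/var/lib/minikube/images/etcd_3.5.14-0"); err != nil {
    		fmt.Println(err)
    	}
    }
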
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
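
The two runs above keep /etc/hosts pointing control-plane.minikube.internal at the node IP: the grep checks for an existing entry and the bash one-liner rewrites the file idempotently before systemd is reloaded. A rough Go equivalent, assuming direct file access rather than minikube's SSH runner (the helper name and the scratch path are hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
    // maps name to ip, the same effect as the grep/rewrite one-liner in the log.
    // Hypothetical helper: minikube performs this over SSH with bash and sudo cp.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale mapping for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Values from the log; writing to a scratch copy rather than /etc/hosts.
    	if err := ensureHostsEntry("/tmp/hosts", "192.168.39.120", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
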
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
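
In the cert steps above, each host CA is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs under `<hash>.0` so the system trust store can resolve it, and every control-plane certificate is probed with `-checkend 86400`, i.e. "still valid for at least another 24 hours". A small wrapper around that checkend probe (illustrative only, not minikube's code; the certificate paths are the ones listed in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // validFor24h wraps the same check the log performs for every control-plane
    // certificate: `openssl x509 -checkend 86400` exits 0 only if the cert will
    // still be valid 24 hours from now.
    func validFor24h(path string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
    }

    func main() {
    	for _, crt := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		fmt.Printf("%s valid for another 24h: %v\n", crt, validFor24h(crt))
    	}
    }
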
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
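
Because existing configuration files were found (restartPrimaryControlPlane), the control plane is rebuilt by replaying individual `kubeadm init phase` subcommands against the regenerated /var/tmp/minikube/kubeadm.yaml instead of running a full `kubeadm init`; the `addon all` phase follows later, once the API server answers /healthz. A sketch of that phase sequence as plain exec calls (binary and config paths taken from the log; the loop itself is illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Binary and config paths as used in the log above.
    	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"

    	// Phases replayed during the restart: certs, kubeconfigs, kubelet,
    	// control-plane static pods, and local etcd.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{kubeadm}, phase...)
    		args = append(args, "--config", cfg)
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    }
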
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
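
The healthz wait above tolerates the expected startup responses: 403 while the anonymous probe is still forbidden (RBAC bootstrap not finished) and 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still failing, stopping only once /healthz returns 200, about 4.5 s after the kubelet restart here. A hedged sketch of such a poll loop (endpoint and overall timeout taken from the log; the HTTP client setup is an assumption):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver's serving cert is not trusted by this anonymous probe,
    	// so certificate verification is skipped, as a health probe typically does.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.120:8443/healthz" // endpoint polled in the log
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			// 403 before RBAC bootstrap and 500 while poststarthooks run are expected.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
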
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
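
From here the restart polls each system-critical pod for the PodReady condition; the interleaved pod_ready.go lines from the other profiles (metrics-server-569cc877fc-*) are the same check repeatedly reporting "Ready":"False". A small client-go sketch of that condition check (the kubeconfig path is hypothetical; the pod name is the coredns pod from the log):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True, which is the
    // status the pod_ready.go lines above are polling for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Hypothetical kubeconfig location; minikube uses the profile's kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-qz5f7", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }
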
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
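	(Annotation, not part of the captured log: unlike the control-plane pods above, the metrics-server pods tracked below never reach Ready within this window, which is what the repeated "Ready":"False" lines record. A hedged sketch of how that state could be inspected on the cluster; the context name and label selector are assumptions, not taken from the log:)
	# illustrative only: inspect why metrics-server stays NotReady
	kubectl --context no-preload-297799 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-297799 -n kube-system describe pod metrics-server-78fcd8795b-x4t76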
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
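	(Annotation, not part of the captured log: because no kube-apiserver, etcd, or other control-plane containers are found, logs.go falls back to gathering node-level logs. The same diagnostics can be reproduced on the node with the commands the cycle above issues over ssh, e.g.:)
	# commands taken verbatim from the Run: lines in the log-gathering cycle above
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400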
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
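The block above is one full diagnostic cycle for the cluster that never comes up: pgrep for a running kube-apiserver, a crictl query per control-plane component (all returning no IDs), then log gathering from kubelet, dmesg, describe nodes, CRI-O, and container status. This cycle repeats for the remainder of the log. As a rough manual equivalent of the container checks being issued here — a sketch only, reusing the exact commands visible in the log above and not part of the test output — one could run, on the node:

    # Sketch: reproduce minikube's per-component container check by hand
    # (commands copied from the log lines above; run inside the minikube node).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
    # The same log sources the test then collects:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400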
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
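The interleaved pod_ready.go lines come from the other three clusters under test (processes 69907, 69419, and 70231), each polling its metrics-server pod and repeatedly observing Ready=False. A hedged manual equivalent of that readiness probe, using a pod name taken from the log above (the jsonpath query is plain kubectl usage, not something the test itself runs, and the kubeconfig/context for the relevant cluster is assumed):

    # Sketch: check the Ready condition that pod_ready.go is polling for.
    # Pod name copied from the log above; point kubectl at the matching cluster.
    kubectl -n kube-system get pod metrics-server-569cc877fc-vqgtm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'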
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
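Every describe-nodes attempt in these cycles fails the same way: the connection to localhost:8443 is refused, which is consistent with the empty crictl results for kube-apiserver above — nothing is serving on the API server port. A minimal way to confirm that symptom by hand (a sketch assuming shell access to the node; these probes are not part of the test output):

    # Sketch: confirm nothing is serving on the apiserver port before retrying kubectl.
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"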
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
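Note: the cycle that just ended repeats for the rest of this log: the collector looks for each control-plane container by name via crictl, then gathers kubelet, dmesg, CRI-O, container-status and describe-nodes output. A minimal shell sketch of the same sweep, run by hand inside the node (the individual commands are copied from the log above; the loop, profile, and component list are only an illustrative way to drive them):

    # Run inside the minikube node (e.g. via `minikube ssh -p <profile>`).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== containers named $c =="
      sudo crictl ps -a --quiet --name="$c"    # empty output => no such container was ever created
    done
    sudo journalctl -u kubelet -n 400          # kubelet unit logs
    sudo journalctl -u crio -n 400             # CRI-O unit logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig # fails below while the apiserver is down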
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
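Note: every describe-nodes attempt in this stretch fails the same way because nothing is answering on localhost:8443 (the crictl sweeps above find no kube-apiserver container at all). A quick manual check from inside the node, assuming the standard 8443 secure port; this is an illustrative sketch, not something the test run executes:

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # If a listener exists, probe the health endpoint (self-signed cert, hence -k).
    # Depending on RBAC this may return 401/403 instead of "ok", but any HTTP
    # response at all proves the apiserver process is up.
    curl -sk https://localhost:8443/healthz || echo "apiserver not responding"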
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
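Note: the block above is the run logged as PID 69907 giving up after its 4m0s WaitExtra budget on the metrics-server pod and falling back to a full cluster reset. The Ready condition it was polling can be checked by hand with kubectl; a sketch assuming the current kubeconfig context already points at this cluster (the pod name is taken from the log):

    # Show the Ready condition of the pod the retry loop was watching
    kubectl -n kube-system get pod metrics-server-569cc877fc-vqgtm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
    # Or block until it becomes Ready, mirroring the 4m0s budget used above
    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-569cc877fc-vqgtm --timeout=4m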
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
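Note: the pod_ready lines here interleave output from several profiles that are starting in parallel; the second numeric column is the PID of the driving process (69419, 69907, 70231, 70480). To follow a single run, filter on that PID. For example, PID 70480 is the run invoking the v1.20.0 kubectl binary above, presumably the old-k8s-version profile; the file name below is a placeholder for a saved copy of this log:

    # Keep only the lines emitted by one run (PID 70480)
    grep ' 70480 ' saved-test-output.log > pid-70480.log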
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
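Note: as with PID 69907 earlier, the run logged as PID 70231 hits the same 4m0s WaitExtra timeout and resets its control plane before retrying. The reset line is exactly what the log shows; the re-initialisation line is only an illustrative follow-up with a placeholder config path, not the command minikube actually generates:

    # Tear down the existing control plane (command taken from the log above)
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # Re-initialise afterwards; <generated-kubeadm-config>.yaml stands in for the
    # config file minikube writes, and the flag set here is an assumption.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm init --config <generated-kubeadm-config>.yaml --ignore-preflight-errors=all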
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
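The pod_ready lines from process 69419 belong to a different profile running in parallel; they show metrics-server-78fcd8795b-x4t76 never reporting Ready. A first step for digging into a pod stuck like this, shown only as a generic illustration (context selection omitted; nothing here is part of the test run):

    # Why is the metrics-server pod not Ready (image pull, readiness probe, RBAC, ...)?
    kubectl -n kube-system describe pod metrics-server-78fcd8795b-x4t76
    kubectl -n kube-system logs deploy/metrics-server --tail=50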
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
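The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and is removed otherwise before kubeadm init runs again. A condensed sketch of the same check (endpoint taken from the log; the loop form is illustrative, minikube issues the commands one by one):

    # Keep each kubeadm-managed kubeconfig only if it points at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # missing or mismatched file: remove (rm -f ignores absent files)
    done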
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
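kubeadm's success message above embeds a join command with a bootstrap token (pvm7ux....), which by default expires after 24 hours. If it has lapsed by the time another node needs to join, an equivalent command can be regenerated on the control plane; this is standard kubeadm usage, not something the test performs:

    # Regenerate a join command with a fresh bootstrap token (run on the control-plane node).
    sudo kubeadm token create --print-join-command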
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
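Because the kvm2 driver is paired with the crio runtime, minikube configures pod networking by writing a bridge CNI config (1-k8s.conflist, 496 bytes) into the guest instead of deploying a CNI DaemonSet. An illustrative way to look at what was installed (the profile name is taken from the log; this is not a step the test runs):

    # Inspect the bridge CNI config minikube just copied into the guest.
    minikube ssh -p embed-certs-731235 -- sudo cat /etc/cni/net.d/1-k8s.conflist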
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
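The repeated "kubectl get sa default" calls above are minikube waiting for the default ServiceAccount to be created by the controller manager; pods cannot be admitted into a namespace before its default ServiceAccount exists, so this loop is the readiness gate behind elevateKubeSystemPrivileges. A condensed sketch of the same wait (the command is the one in the log; the loop form and sleep interval are illustrative):

    # Poll until the "default" ServiceAccount exists, as minikube does before proceeding.
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done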
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
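pod_ready.go now waits for each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report Ready. An illustrative equivalent with plain kubectl for the CoreDNS case (the context name is assumed to match the profile; the timeout mirrors the 6m0s in the log):

    # Roughly what pod_ready.go does for CoreDNS, expressed as a single kubectl wait.
    kubectl --context embed-certs-731235 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m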
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
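The three kubectl apply calls above install the metrics-server, storageclass, and storage-provisioner addon manifests. Whether metrics-server actually comes up is what the later readiness polling decides; an illustrative manual verification (binary and kubeconfig paths as in the log; v1beta1.metrics.k8s.io is the APIService name metrics-server normally registers):

    # Check that the metrics APIService registered and that metrics are being served.
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get apiservice v1beta1.metrics.k8s.io
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      top nodes   # only works once metrics-server is serving metrics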
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
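
The final start.go line reports the client/cluster version skew ("kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)"). A small sketch of that comparison, assuming simple "major.minor.patch[-suffix]" strings; minikube's actual version handling may differ.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of two
// version strings, e.g. "1.30.3" vs "1.31.0-beta.0" -> 1.
func minorSkew(clientVer, serverVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(clientVer)
	if err != nil {
		return 0, err
	}
	s, err := minor(serverVer)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.3", "1.31.0-beta.0")
	fmt.Println("minor skew:", skew) // 1
}
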
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
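
The config-check sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here none of the files exist, so the greps exit 2 and the rm -f calls are no-ops). A rough Go sketch of that loop; in minikube these commands run over SSH inside the VM, this version runs them locally for illustration only.

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs keeps each kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise it is deleted so kubeadm can
// regenerate it during init.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			// grep exits non-zero when the endpoint (or the file) is missing.
			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
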
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
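
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of that computation; the path below uses the certificateDir from this log (/var/lib/minikube/certs), whereas a stock kubeadm host would use /etc/kubernetes/pki/ca.crt.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces the value passed to --discovery-token-ca-cert-hash:
// sha256 over the CA certificate's raw SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(hash)
}
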
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
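
The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. For orientation, a typical bridge CNI configuration of the kind being written looks roughly like the constant below; the field values are illustrative assumptions, not the exact file minikube generates.

package main

import "os"

// bridgeConflist is a representative bridge CNI config; the real file is
// written with root privileges on the node by the ssh runner above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
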
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
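
The elevateKubeSystemPrivileges step above creates the minikube-rbac clusterrolebinding and then retries `kubectl get sa default` roughly every 500ms until the default service account exists. A minimal Go sketch of that polling loop, using the binary and kubeconfig paths shown in the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// timeout expires.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}
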
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
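
The addon flow above copies each manifest to /etc/kubernetes/addons and then applies them in one `kubectl apply -f ... -f ...` invocation against the node's embedded kubeconfig. A rough local sketch of that apply step; in minikube the command is executed inside the VM via the ssh runner, so running it directly as below is illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons runs kubectl apply over a list of addon manifests using the
// node-local kubeconfig, mirroring the metrics-server apply in the log.
func applyAddons(kubectl string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = applyAddons("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
}
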
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
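
The pod_ready.go lines above wait until every kube-system pod matching the listed selectors reports the PodReady condition as True. A client-go sketch of the same check, using the selectors from the log; this is not minikube's internal helper, just an approximation against the node-local kubeconfig.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the selector has
// the PodReady condition set to True.
func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		for {
			ok, err := podsReady(cs, sel)
			if err == nil && ok {
				break
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println(sel, "ready")
	}
}
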
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
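
The node_conditions.go lines above read each node's ephemeral-storage and CPU capacity and verify that no pressure condition is set. A client-go sketch of an equivalent check; again an approximation rather than minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkNodePressure prints each node's ephemeral-storage and CPU capacity and
// fails if MemoryPressure or DiskPressure is reported True.
func checkNodePressure(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(checkNodePressure(kubernetes.NewForConfigOrDie(cfg)))
}
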
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.290376691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254716290343334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d26a5e8-ff66-4b13-977a-c75cf75034cc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.291104908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee3e46c6-2af7-4187-ab61-3e21b8f8f1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.291154342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee3e46c6-2af7-4187-ab61-3e21b8f8f1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.291194928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ee3e46c6-2af7-4187-ab61-3e21b8f8f1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.327196806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b81c1c32-9635-451b-9a85-1655cf3a00ec name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.327278351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b81c1c32-9635-451b-9a85-1655cf3a00ec name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.328615656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dce71c2-c1db-4a3c-a328-9058ba65bc94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.329156325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254716329129753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dce71c2-c1db-4a3c-a328-9058ba65bc94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.329660115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=162791c7-f94f-431d-96ed-2c64a1b99507 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.329709599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=162791c7-f94f-431d-96ed-2c64a1b99507 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.329747755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=162791c7-f94f-431d-96ed-2c64a1b99507 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.365113279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3916f88-800c-4966-b9d7-4eb2accb488f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.365222038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3916f88-800c-4966-b9d7-4eb2accb488f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.366321423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaf2159f-3b95-48f5-91ed-006cd3016f53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.366848243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254716366819720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaf2159f-3b95-48f5-91ed-006cd3016f53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.367388435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=163375b6-b3e3-4ec6-a53a-1545ab46afaa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.367463191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=163375b6-b3e3-4ec6-a53a-1545ab46afaa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.367505958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=163375b6-b3e3-4ec6-a53a-1545ab46afaa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.402902372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb918f71-9600-487d-ab89-71d5b7c20e25 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.403075387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb918f71-9600-487d-ab89-71d5b7c20e25 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.404617492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03c49d93-f27d-49d8-83d1-80ab17bf89eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.405147204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254716405116586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03c49d93-f27d-49d8-83d1-80ab17bf89eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.405700591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d501a7a-446e-4d32-af0b-8a20728844bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.405775089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d501a7a-446e-4d32-af0b-8a20728844bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:05:16 old-k8s-version-188043 crio[643]: time="2024-07-29 12:05:16.405812853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3d501a7a-446e-4d32-af0b-8a20728844bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051118] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040668] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021934] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.587255] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 11:48] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.065705] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081948] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.207768] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.125104] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.281042] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.791991] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.065131] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.421882] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.167503] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 11:52] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[Jul29 11:54] systemd-fstab-generator[5288]: Ignoring "noauto" option for root device
	[  +0.063650] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:05:16 up 17 min,  0 users,  load average: 0.02, 0.02, 0.02
	Linux old-k8s-version-188043 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: net/http.(*Transport).dialConnFor(0xc0006c2000, 0xc000a018c0)
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: created by net/http.(*Transport).queueForDial
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: goroutine 167 [select]:
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c2ac00, 0xc000e7e180, 0xc00057ecc0, 0xc00057ec60)
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: created by net.(*netFD).connect
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: goroutine 168 [select]:
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c2a780, 0xc000e7e280, 0xc00057ed80, 0xc00057ed20)
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]: created by net.(*netFD).connect
	Jul 29 12:05:11 old-k8s-version-188043 kubelet[6472]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jul 29 12:05:11 old-k8s-version-188043 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 12:05:11 old-k8s-version-188043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 12:05:12 old-k8s-version-188043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 29 12:05:12 old-k8s-version-188043 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 12:05:12 old-k8s-version-188043 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 12:05:12 old-k8s-version-188043 kubelet[6480]: I0729 12:05:12.171814    6480 server.go:416] Version: v1.20.0
	Jul 29 12:05:12 old-k8s-version-188043 kubelet[6480]: I0729 12:05:12.172191    6480 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 12:05:12 old-k8s-version-188043 kubelet[6480]: I0729 12:05:12.174303    6480 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 12:05:12 old-k8s-version-188043 kubelet[6480]: I0729 12:05:12.175497    6480 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 29 12:05:12 old-k8s-version-188043 kubelet[6480]: W0729 12:05:12.175611    6480 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (249.685075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-188043" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)
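
Illustrative follow-up (not part of the captured run): the kubelet journal above ends with "Cannot detect current cgroup on cgroup v2" and a restart counter of 114, and the minikube output itself suggests forcing the kubelet cgroup driver. A minimal manual triage sketch, assuming the same profile name and reusing only commands already referenced in the log, might look like:

	# tail the crash-looping kubelet journal on the node (same command the log suggests)
	minikube -p old-k8s-version-188043 ssh -- sudo journalctl -xeu kubelet | tail -n 50
	# list any control-plane containers the runtime managed to start (same crictl invocation as above)
	minikube -p old-k8s-version-188043 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the existing profile with the cgroup driver override the suggestion line recommends
	minikube start -p old-k8s-version-188043 --extra-config=kubelet.cgroup-driver=systemd

This is a sketch only; the actual start flags used for this profile are not shown in this excerpt, and an existing profile normally retains its original driver and runtime settings.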

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (428.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-731235 -n embed-certs-731235
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:08:54.169478338 +0000 UTC m=+6512.253206824
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-731235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-731235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.644µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-731235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
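
Illustrative note (not from the captured run): the assertion above expects the dashboard-metrics-scraper deployment to reference an image containing "registry.k8s.io/echoserver:1.4", but the describe call timed out because the context deadline had already expired. Assuming the cluster were reachable, a hypothetical manual check of the same image field could be:

	# hypothetical spot check of the image the test asserts on (requires a reachable apiserver)
	kubectl --context embed-certs-731235 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'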
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-731235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-731235 logs -n 25: (1.37515563s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	| start   | -p newest-cni-485099 --memory=2200 --alsologtostderr   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	| addons  | enable metrics-server -p newest-cni-485099             | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-485099                  | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-485099 --memory=2200 --alsologtostderr   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:08:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:08:45.284723   77368 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:08:45.284977   77368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:45.284986   77368 out.go:304] Setting ErrFile to fd 2...
	I0729 12:08:45.284990   77368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:45.285193   77368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 12:08:45.285702   77368 out.go:298] Setting JSON to false
	I0729 12:08:45.286733   77368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6671,"bootTime":1722248254,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:08:45.286801   77368 start.go:139] virtualization: kvm guest
	I0729 12:08:45.289076   77368 out.go:177] * [newest-cni-485099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:08:45.291080   77368 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 12:08:45.291105   77368 notify.go:220] Checking for updates...
	I0729 12:08:45.293720   77368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:08:45.295140   77368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 12:08:45.296482   77368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 12:08:45.297799   77368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:08:45.299345   77368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:08:45.301344   77368 config.go:182] Loaded profile config "newest-cni-485099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:08:45.301975   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.302069   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.317930   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0729 12:08:45.318302   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.318856   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.318883   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.319229   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.319428   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.319652   77368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:08:45.319948   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.319983   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.335918   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0729 12:08:45.336296   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.336817   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.336851   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.337178   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.337391   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.375520   77368 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:08:45.376873   77368 start.go:297] selected driver: kvm2
	I0729 12:08:45.376890   77368 start.go:901] validating driver "kvm2" against &{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:45.377010   77368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:08:45.377678   77368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:08:45.377752   77368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:08:45.392757   77368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:08:45.393129   77368 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 12:08:45.393156   77368 cni.go:84] Creating CNI manager for ""
	I0729 12:08:45.393165   77368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:08:45.393215   77368 start.go:340] cluster config:
	{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:45.393341   77368 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:08:45.395412   77368 out.go:177] * Starting "newest-cni-485099" primary control-plane node in "newest-cni-485099" cluster
	I0729 12:08:45.396665   77368 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:08:45.396699   77368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:08:45.396707   77368 cache.go:56] Caching tarball of preloaded images
	I0729 12:08:45.396793   77368 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:08:45.396805   77368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 12:08:45.396936   77368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json ...
	I0729 12:08:45.397157   77368 start.go:360] acquireMachinesLock for newest-cni-485099: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:08:45.397201   77368 start.go:364] duration metric: took 23.441µs to acquireMachinesLock for "newest-cni-485099"
	I0729 12:08:45.397220   77368 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:08:45.397229   77368 fix.go:54] fixHost starting: 
	I0729 12:08:45.397485   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.397518   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.412393   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0729 12:08:45.412861   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.413416   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.413440   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.413751   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.413952   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.414122   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:08:45.415819   77368 fix.go:112] recreateIfNeeded on newest-cni-485099: state=Stopped err=<nil>
	I0729 12:08:45.415858   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	W0729 12:08:45.416012   77368 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:08:45.417975   77368 out.go:177] * Restarting existing kvm2 VM for "newest-cni-485099" ...
	I0729 12:08:45.419204   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Start
	I0729 12:08:45.419359   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring networks are active...
	I0729 12:08:45.420082   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring network default is active
	I0729 12:08:45.420475   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring network mk-newest-cni-485099 is active
	I0729 12:08:45.420889   77368 main.go:141] libmachine: (newest-cni-485099) Getting domain xml...
	I0729 12:08:45.421732   77368 main.go:141] libmachine: (newest-cni-485099) Creating domain...
	I0729 12:08:46.673144   77368 main.go:141] libmachine: (newest-cni-485099) Waiting to get IP...
	I0729 12:08:46.673953   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:46.674423   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:46.674514   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:46.674396   77402 retry.go:31] will retry after 306.309487ms: waiting for machine to come up
	I0729 12:08:46.982000   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:46.982543   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:46.982572   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:46.982502   77402 retry.go:31] will retry after 289.977251ms: waiting for machine to come up
	I0729 12:08:47.273912   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:47.274296   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:47.274329   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:47.274230   77402 retry.go:31] will retry after 464.597308ms: waiting for machine to come up
	I0729 12:08:47.740807   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:47.741277   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:47.741304   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:47.741225   77402 retry.go:31] will retry after 457.661408ms: waiting for machine to come up
	I0729 12:08:48.200839   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:48.201376   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:48.201397   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:48.201328   77402 retry.go:31] will retry after 519.551439ms: waiting for machine to come up
	I0729 12:08:48.721993   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:48.722488   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:48.722514   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:48.722465   77402 retry.go:31] will retry after 796.269012ms: waiting for machine to come up
	I0729 12:08:49.519762   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:49.520232   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:49.520271   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:49.520194   77402 retry.go:31] will retry after 791.553851ms: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.885168607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254934885143406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bac6e688-e399-4e7a-8d4d-a2d7705df281 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.885874299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c1a2b5b-056d-4d4b-91af-10aa82600b14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.885941994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c1a2b5b-056d-4d4b-91af-10aa82600b14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.886142544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c1a2b5b-056d-4d4b-91af-10aa82600b14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.896088825Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85298e9f-d018-4ae9-b7af-7a7748a84670 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.896415624Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2fea7bc2-554e-4fe9-b2af-c4e340e85c18,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253960341759366,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:52:40.029665085Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2ee17281a25eab2366832a3ae6b98fe9418663be4ebf88a3e8dd6d6c2b0e82c,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-gxczz,Uid:096f1de4-e064-42bc-8a16-aa08320addb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253960106214580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-gxczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096f1de4-e064-42bc-8a16-aa08320addb
4,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:39.781785919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&PodSandboxMetadata{Name:kube-proxy-ch48n,Uid:68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958947608297,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.038283920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6md2j,Ui
d:37472eb3-a941-4ff9-a0af-0ce42d604318,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958827592749,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37472eb3-a941-4ff9-a0af-0ce42d604318,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.515118729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rlhzt,Uid:298c2d3b-8a1e-4146-987a-f9c1eff6f92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253958810653168,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:52:38.488537858Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-731235,Uid:302d27b116b4d52c090d34d6a9d4555a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939117417354,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.202:2379,kubernetes.io/config.hash: 302d27b116b4d52c090d34d6a9d4555a,kubernetes.io/config.seen: 2024-07-29T11:52:18.644036174Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,
Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-731235,Uid:a60b2b0d2997fb059777c19017f4b354,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253939115526546,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.202:8443,kubernetes.io/config.hash: a60b2b0d2997fb059777c19017f4b354,kubernetes.io/config.seen: 2024-07-29T11:52:18.644037640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-731235,Uid:f60b667741fe404f7fea63d7874436bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939083300491,Labels:m
ap[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f60b667741fe404f7fea63d7874436bf,kubernetes.io/config.seen: 2024-07-29T11:52:18.644030887Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-731235,Uid:8ae99724a0457d2e75a03486422f3aa2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722253939081619395,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: 8ae99724a0457d2e75a03486422f3aa2,kubernetes.io/config.seen: 2024-07-29T11:52:18.644034868Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=85298e9f-d018-4ae9-b7af-7a7748a84670 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.897472647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01b70a9b-f034-4c74-bf83-ed16ffc9520b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.897589663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01b70a9b-f034-4c74-bf83-ed16ffc9520b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.898007905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01b70a9b-f034-4c74-bf83-ed16ffc9520b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.934306988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eb7a5cb-0e6e-4789-9416-e3750a5878a5 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.934456980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eb7a5cb-0e6e-4789-9416-e3750a5878a5 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.937159379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adf6e161-b62a-4640-b9be-a76711002cd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.937728736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254934937691850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adf6e161-b62a-4640-b9be-a76711002cd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.938419336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1533187-8b1d-409b-89fc-0cf34fbe188a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.938516655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1533187-8b1d-409b-89fc-0cf34fbe188a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.938793085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1533187-8b1d-409b-89fc-0cf34fbe188a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.955058821Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c2431d24-5333-43cc-bb5f-0d2eb533b8d1 name=/runtime.v1.ImageService/ListImages
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.955640048Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,}
,&Image{Id:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a
21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gc
r.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=c2431d24-5333-43cc-bb5f-0d2eb533b8d1 name=/runtime.v1.ImageService/ListImages
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.981907548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62774555-473f-4201-9d3b-601ed4b57a47 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.982036241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62774555-473f-4201-9d3b-601ed4b57a47 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.983931597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02a1030b-b65a-4cb9-8c2c-62caea64fcfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.984476048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254934984442911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02a1030b-b65a-4cb9-8c2c-62caea64fcfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.985314694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86eb96a3-8bc3-4b3f-8bc3-27eb7f573a49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.985407779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86eb96a3-8bc3-4b3f-8bc3-27eb7f573a49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:54 embed-certs-731235 crio[720]: time="2024-07-29 12:08:54.985689089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f,PodSandboxId:a84509d4ea22ed5052fe1d0bba21198a0058ca70263b502a33856a0dd6a871cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253960489012354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fea7bc2-554e-4fe9-b2af-c4e340e85c18,},Annotations:map[string]string{io.kubernetes.container.hash: 173634fd,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83,PodSandboxId:1719ec1d1a9f3e4d89657e2f7cd2a822b03afa2bf27b917efa9994e232ea4c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959937360199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298c2d3b-8a1e-4146-987a-f9c1eff6f92c,},Annotations:map[string]string{io.kubernetes.container.hash: f9f333a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a,PodSandboxId:4a0ff83e61ae46c9e12c94969c7c7c03847e3790183c83520d3a23444ca49dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253959819109768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6md2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
7472eb3-a941-4ff9-a0af-0ce42d604318,},Annotations:map[string]string{io.kubernetes.container.hash: a22b6753,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f,PodSandboxId:0ab73f6229550e0624eac56c0de64cc1ea26d20044a58cc3bc0d8d82b22a47ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722253959304098231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ch48n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68896b36-6aa0-4dcc-ad3a-74573aa1c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 4ae37b05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b,PodSandboxId:8a11380193b6bfbab850dd2c4d95a9fd38b39362da0cf4dc2271399a521eda55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253939401272825,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302d27b116b4d52c090d34d6a9d4555a,},Annotations:map[string]string{io.kubernetes.container.hash: 30da0c39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3,PodSandboxId:abd79b4ec7ce8dd784f4ffe888a97f29d97a7f6af02e6573dbd1c303094bed64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253939368763724,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ae99724a0457d2e75a03486422f3aa2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc,PodSandboxId:ba91a3458c7d79bc2e997337bf9878eff517e736e83bc3da27dd3497fd244534,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253939305995266,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88,PodSandboxId:214b0c6b2009c39d134d523a138c8bf5b7bc9e5f4cd418999feeceeac32901c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253939281984009,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60b667741fe404f7fea63d7874436bf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4,PodSandboxId:abdd90a62be0421ae8a6ac0003705ed4ca87415a46b066f821c119287420051d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253651263271287,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-731235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b2b0d2997fb059777c19017f4b354,},Annotations:map[string]string{io.kubernetes.container.hash: 674703b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86eb96a3-8bc3-4b3f-8bc3-27eb7f573a49 name=/runtime.v1.RuntimeService/ListContainers
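
The CRI-O debug lines above are the RuntimeService/ImageService gRPC calls (Version, ImageFsInfo, ListContainers, ListImages) issued while this log bundle was collected. As a minimal sketch, the same queries can be issued by hand from inside the node, assuming crictl is available there; the invocations below are illustrative and not part of the test itself:

  # open a shell on the node for this profile
  out/minikube-linux-amd64 -p embed-certs-731235 ssh

  # then, inside the VM:
  sudo crictl version        # RuntimeService/Version
  sudo crictl ps -a          # RuntimeService/ListContainers (no filter)
  sudo crictl images         # ImageService/ListImages
  sudo crictl imagefsinfo    # ImageService/ImageFsInfo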
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17ed1f9cdc5c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   a84509d4ea22e       storage-provisioner
	c504bd9a6517f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   1719ec1d1a9f3       coredns-7db6d8ff4d-rlhzt
	f159ded4e861d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   4a0ff83e61ae4       coredns-7db6d8ff4d-6md2j
	540f29562a87f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   0ab73f6229550       kube-proxy-ch48n
	292332f55fd85       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   8a11380193b6b       etcd-embed-certs-731235
	4bac7e946a3aa       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   abd79b4ec7ce8       kube-scheduler-embed-certs-731235
	f60dbe60770ee       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   ba91a3458c7d7       kube-apiserver-embed-certs-731235
	afdc1f5fc4c43       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   214b0c6b2009c       kube-controller-manager-embed-certs-731235
	8739168a3bbb1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 minutes ago      Exited              kube-apiserver            1                   abdd90a62be04       kube-apiserver-embed-certs-731235
	
	
	==> coredns [c504bd9a6517feb8d338909c101439d635b36ff9148d1ae8f5b327b8e7623d83] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f159ded4e861dee9c8c1ff6dff8ccf557ddce1f464ae642d414a71ca0cd4171a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
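
Both coredns replicas report the same configuration SHA512, i.e. they loaded the same Corefile. In a kubeadm-provisioned cluster such as this one, that Corefile lives in the coredns ConfigMap; a hedged example of inspecting it (the kubectl context name is inferred from the minikube profile):

  kubectl --context embed-certs-731235 -n kube-system get configmap coredns -o yaml
  kubectl --context embed-certs-731235 -n kube-system logs -l k8s-app=kube-dns --tail=20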
	
	
	==> describe nodes <==
	Name:               embed-certs-731235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-731235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=embed-certs-731235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:52:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-731235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:08:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:08:02 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:08:02 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:08:02 +0000   Mon, 29 Jul 2024 11:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:08:02 +0000   Mon, 29 Jul 2024 11:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-731235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e72225a70aa443afbe796c8a6ba51195
	  System UUID:                e72225a7-0aa4-43af-be79-6c8a6ba51195
	  Boot ID:                    f81e00dd-ec80-4e0e-b189-1c01131c4473
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6md2j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-rlhzt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-731235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-731235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-731235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-ch48n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-731235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-gxczz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-731235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-731235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-731235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-731235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-731235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-731235 event: Registered Node embed-certs-731235 in Controller
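
The node summary above is the output of a node describe; a short sketch of how to regenerate it and how the request percentages are derived (context name inferred from the profile, percentages rounded):

  kubectl --context embed-certs-731235 describe node embed-certs-731235

  # Requests are shown relative to allocatable capacity, e.g.:
  #   cpu:    950m of 2 cores (2000m)         -> ~47%
  #   memory: 440Mi of 2164184Ki allocatable  -> ~20%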
	
	
	==> dmesg <==
	[  +0.040514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.823952] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.675782] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.576972] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.031640] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.055931] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065827] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.199495] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.130976] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.320203] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.450055] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.057915] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.589649] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +4.592807] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320400] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.745588] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 11:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.681486] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +4.710216] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.869378] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[ +13.882153] systemd-fstab-generator[4070]: Ignoring "noauto" option for root device
	[  +0.116175] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 11:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [292332f55fd85c2812f7e36eccd39f48da2813674e2b84477769bcd1426a0b8b] <==
	{"level":"info","ts":"2024-07-29T11:52:19.769185Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:52:19.80091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.800965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgPreVoteResp from db33251a0b9c6fb3 at term 1"}
	{"level":"info","ts":"2024-07-29T11:52:19.801014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgVoteResp from db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.801039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db33251a0b9c6fb3 elected leader db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-07-29T11:52:19.80514Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"db33251a0b9c6fb3","local-member-attributes":"{Name:embed-certs-731235 ClientURLs:[https://192.168.61.202:2379]}","request-path":"/0/members/db33251a0b9c6fb3/attributes","cluster-id":"834577a0a9e3ba88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:52:19.80519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:19.805583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:19.811001Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.81691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:19.816948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:19.816987Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.817045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.817063Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:19.821478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.202:2379"}
	{"level":"info","ts":"2024-07-29T11:52:19.826422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:02:20.312816Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-07-29T12:02:20.321782Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":715,"took":"8.106933ms","hash":1252182436,"current-db-size-bytes":2187264,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2187264,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-29T12:02:20.321943Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1252182436,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T12:07:20.320298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-07-29T12:07:20.335177Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":958,"took":"12.643776ms","hash":3754419723,"current-db-size-bytes":2187264,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T12:07:20.335316Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3754419723,"revision":958,"compact-revision":715}
	
	
	==> kernel <==
	 12:08:55 up 21 min,  0 users,  load average: 0.30, 0.24, 0.19
	Linux embed-certs-731235 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8739168a3bbb166d3dc6156e969baf4e9e5e7be3c280dd20d80ba9b9ae71e2e4] <==
	W0729 11:52:11.532156       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.542040       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.543515       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.545819       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.557092       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.558570       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.577264       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.605779       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.628984       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.629063       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.629323       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.637316       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.640044       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.800628       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.812038       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.851661       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:11.892598       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.111441       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.144258       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.189769       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.194611       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.293623       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.302989       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:12.620188       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:52:15.221367       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f60dbe60770ee2d545a8c9c63f32cf05b6a999efc5df873c92be530cb928ccbc] <==
	I0729 12:03:23.129360       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:05:23.129027       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:05:23.129113       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:05:23.129121       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:05:23.130334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:05:23.130447       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:05:23.130462       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:07:22.135163       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:22.135521       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 12:07:23.135736       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:23.135921       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:07:23.135963       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:07:23.136075       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:23.136156       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:07:23.137397       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:08:23.136187       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:08:23.136247       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:08:23.136256       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:08:23.138591       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:08:23.138653       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:08:23.138660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [afdc1f5fc4c437ebceca6f54f61afa1367a5d26c144fb5e252036ba116e26f88] <==
	E0729 12:03:37.630080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:03:38.224192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:03:38.915565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="97.988µs"
	E0729 12:04:07.635566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:04:08.231491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:04:37.640799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:04:38.241817       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:05:07.646191       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:05:08.256019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:05:37.652020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:05:38.263892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:07.656753       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:06:08.272958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:37.662649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:06:38.283022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:07.669573       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:07:08.293472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:37.676169       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:07:38.304666       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:08:07.681353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:08:08.312742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:08:36.915164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="447.718µs"
	E0729 12:08:37.689113       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:08:38.321094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:08:51.910405       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="117.862µs"
	
	
	==> kube-proxy [540f29562a87f0dd1ec7920990c414230ab407470ce6343a05c69737b1f9f42f] <==
	I0729 11:52:40.068334       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:52:40.163310       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	I0729 11:52:40.406132       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:52:40.406229       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:52:40.406262       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:52:40.414638       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:52:40.414896       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:52:40.414930       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:52:40.417131       1 config.go:192] "Starting service config controller"
	I0729 11:52:40.417227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:52:40.417324       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:52:40.417372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:52:40.418379       1 config.go:319] "Starting node config controller"
	I0729 11:52:40.418480       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:52:40.518060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:52:40.518135       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:52:40.518593       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4bac7e946a3aa6f4a84aadfef14d8fcde56ca7755021ddbe7765bc82f34081e3] <==
	W0729 11:52:22.149427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:52:22.149455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:52:22.150970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:22.151070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:22.151284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:22.151384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:22.151610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:52:22.154028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:52:22.154243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:52:22.154336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:52:23.083013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.083062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.101215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.101785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.207733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:52:23.208136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:52:23.235154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:23.235207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:23.295143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:23.295296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:23.310620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:23.310744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:23.380757       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:52:23.380994       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:52:25.733530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:06:26 embed-certs-731235 kubelet[3884]: E0729 12:06:26.895601    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:06:41 embed-certs-731235 kubelet[3884]: E0729 12:06:41.897222    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:06:54 embed-certs-731235 kubelet[3884]: E0729 12:06:54.895979    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:07:08 embed-certs-731235 kubelet[3884]: E0729 12:07:08.896245    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:07:20 embed-certs-731235 kubelet[3884]: E0729 12:07:20.895013    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:07:24 embed-certs-731235 kubelet[3884]: E0729 12:07:24.930251    3884 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:07:24 embed-certs-731235 kubelet[3884]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:07:24 embed-certs-731235 kubelet[3884]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:07:24 embed-certs-731235 kubelet[3884]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:07:24 embed-certs-731235 kubelet[3884]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:07:34 embed-certs-731235 kubelet[3884]: E0729 12:07:34.897366    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:07:46 embed-certs-731235 kubelet[3884]: E0729 12:07:46.894763    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:07:57 embed-certs-731235 kubelet[3884]: E0729 12:07:57.895646    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:08:12 embed-certs-731235 kubelet[3884]: E0729 12:08:12.897957    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]: E0729 12:08:24.910165    3884 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]: E0729 12:08:24.910244    3884 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]: E0729 12:08:24.910459    3884 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46jv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-gxczz_kube-system(096f1de4-e064-42bc-8a16-aa08320addb4): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]: E0729 12:08:24.910496    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]: E0729 12:08:24.929805    3884 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:08:24 embed-certs-731235 kubelet[3884]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:08:36 embed-certs-731235 kubelet[3884]: E0729 12:08:36.897288    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	Jul 29 12:08:51 embed-certs-731235 kubelet[3884]: E0729 12:08:51.895159    3884 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gxczz" podUID="096f1de4-e064-42bc-8a16-aa08320addb4"
	
	
	==> storage-provisioner [17ed1f9cdc5c75984fac0b852f2171e81bdec3faebe83a70fb398ac825fd181f] <==
	I0729 11:52:40.592955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:52:40.603424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:52:40.603709       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:52:40.616496       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:52:40.617159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5!
	I0729 11:52:40.617525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a277a6e-a739-4e1c-bf40-40fb6d89633b", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5 became leader
	I0729 11:52:40.718096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-731235_3681e83f-dde9-4a42-b5d3-b716207010a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-731235 -n embed-certs-731235
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-731235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gxczz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz: exit status 1 (68.885395ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gxczz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-731235 describe pod metrics-server-569cc877fc-gxczz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (428.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (465.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:09:51.90920344 +0000 UTC m=+6569.992931914
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.183µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-754486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-754486 logs -n 25
E0729 12:09:52.862473   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-754486 logs -n 25: (1.165322834s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	| start   | -p newest-cni-485099 --memory=2200 --alsologtostderr   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	| addons  | enable metrics-server -p newest-cni-485099             | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-485099                  | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-485099 --memory=2200 --alsologtostderr   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 12:08 UTC | 29 Jul 24 12:08 UTC |
	| image   | newest-cni-485099 image list                           | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	| delete  | -p newest-cni-485099                                   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:09 UTC | 29 Jul 24 12:09 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:08:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:08:45.284723   77368 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:08:45.284977   77368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:45.284986   77368 out.go:304] Setting ErrFile to fd 2...
	I0729 12:08:45.284990   77368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:45.285193   77368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 12:08:45.285702   77368 out.go:298] Setting JSON to false
	I0729 12:08:45.286733   77368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6671,"bootTime":1722248254,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:08:45.286801   77368 start.go:139] virtualization: kvm guest
	I0729 12:08:45.289076   77368 out.go:177] * [newest-cni-485099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:08:45.291080   77368 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 12:08:45.291105   77368 notify.go:220] Checking for updates...
	I0729 12:08:45.293720   77368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:08:45.295140   77368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 12:08:45.296482   77368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 12:08:45.297799   77368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:08:45.299345   77368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:08:45.301344   77368 config.go:182] Loaded profile config "newest-cni-485099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:08:45.301975   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.302069   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.317930   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0729 12:08:45.318302   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.318856   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.318883   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.319229   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.319428   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.319652   77368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:08:45.319948   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.319983   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.335918   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0729 12:08:45.336296   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.336817   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.336851   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.337178   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.337391   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.375520   77368 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:08:45.376873   77368 start.go:297] selected driver: kvm2
	I0729 12:08:45.376890   77368 start.go:901] validating driver "kvm2" against &{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:45.377010   77368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:08:45.377678   77368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:08:45.377752   77368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:08:45.392757   77368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:08:45.393129   77368 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 12:08:45.393156   77368 cni.go:84] Creating CNI manager for ""
	I0729 12:08:45.393165   77368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:08:45.393215   77368 start.go:340] cluster config:
	{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:45.393341   77368 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:08:45.395412   77368 out.go:177] * Starting "newest-cni-485099" primary control-plane node in "newest-cni-485099" cluster
	I0729 12:08:45.396665   77368 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:08:45.396699   77368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:08:45.396707   77368 cache.go:56] Caching tarball of preloaded images
	I0729 12:08:45.396793   77368 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:08:45.396805   77368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 12:08:45.396936   77368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json ...
	I0729 12:08:45.397157   77368 start.go:360] acquireMachinesLock for newest-cni-485099: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:08:45.397201   77368 start.go:364] duration metric: took 23.441µs to acquireMachinesLock for "newest-cni-485099"
	I0729 12:08:45.397220   77368 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:08:45.397229   77368 fix.go:54] fixHost starting: 
	I0729 12:08:45.397485   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:08:45.397518   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:08:45.412393   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0729 12:08:45.412861   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:08:45.413416   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:08:45.413440   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:08:45.413751   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:08:45.413952   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:45.414122   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:08:45.415819   77368 fix.go:112] recreateIfNeeded on newest-cni-485099: state=Stopped err=<nil>
	I0729 12:08:45.415858   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	W0729 12:08:45.416012   77368 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:08:45.417975   77368 out.go:177] * Restarting existing kvm2 VM for "newest-cni-485099" ...
	I0729 12:08:45.419204   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Start
	I0729 12:08:45.419359   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring networks are active...
	I0729 12:08:45.420082   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring network default is active
	I0729 12:08:45.420475   77368 main.go:141] libmachine: (newest-cni-485099) Ensuring network mk-newest-cni-485099 is active
	I0729 12:08:45.420889   77368 main.go:141] libmachine: (newest-cni-485099) Getting domain xml...
	I0729 12:08:45.421732   77368 main.go:141] libmachine: (newest-cni-485099) Creating domain...
	I0729 12:08:46.673144   77368 main.go:141] libmachine: (newest-cni-485099) Waiting to get IP...
	I0729 12:08:46.673953   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:46.674423   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:46.674514   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:46.674396   77402 retry.go:31] will retry after 306.309487ms: waiting for machine to come up
	I0729 12:08:46.982000   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:46.982543   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:46.982572   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:46.982502   77402 retry.go:31] will retry after 289.977251ms: waiting for machine to come up
	I0729 12:08:47.273912   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:47.274296   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:47.274329   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:47.274230   77402 retry.go:31] will retry after 464.597308ms: waiting for machine to come up
	I0729 12:08:47.740807   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:47.741277   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:47.741304   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:47.741225   77402 retry.go:31] will retry after 457.661408ms: waiting for machine to come up
	I0729 12:08:48.200839   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:48.201376   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:48.201397   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:48.201328   77402 retry.go:31] will retry after 519.551439ms: waiting for machine to come up
	I0729 12:08:48.721993   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:48.722488   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:48.722514   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:48.722465   77402 retry.go:31] will retry after 796.269012ms: waiting for machine to come up
	I0729 12:08:49.519762   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:49.520232   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:49.520271   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:49.520194   77402 retry.go:31] will retry after 791.553851ms: waiting for machine to come up
	I0729 12:08:50.313139   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:50.313644   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:50.313666   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:50.313611   77402 retry.go:31] will retry after 1.327456554s: waiting for machine to come up
	I0729 12:08:51.642453   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:51.642906   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:51.642934   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:51.642858   77402 retry.go:31] will retry after 1.759062046s: waiting for machine to come up
	I0729 12:08:53.403827   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:53.404282   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:53.404311   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:53.404247   77402 retry.go:31] will retry after 2.232869103s: waiting for machine to come up
	I0729 12:08:55.638722   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:55.639246   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:55.639271   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:55.639197   77402 retry.go:31] will retry after 2.478318383s: waiting for machine to come up
	I0729 12:08:58.119762   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:58.120305   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:58.120326   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:58.120257   77402 retry.go:31] will retry after 3.344807125s: waiting for machine to come up
	I0729 12:09:01.467199   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:01.467582   77368 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:09:01.467629   77368 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:09:01.467553   77402 retry.go:31] will retry after 3.51568297s: waiting for machine to come up
	I0729 12:09:04.985265   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:04.985726   77368 main.go:141] libmachine: (newest-cni-485099) Found IP for machine: 192.168.72.213
	I0729 12:09:04.985751   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has current primary IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:04.985760   77368 main.go:141] libmachine: (newest-cni-485099) Reserving static IP address...
	I0729 12:09:04.986191   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "newest-cni-485099", mac: "52:54:00:82:f5:00", ip: "192.168.72.213"} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:04.986217   77368 main.go:141] libmachine: (newest-cni-485099) Reserved static IP address: 192.168.72.213
	I0729 12:09:04.986235   77368 main.go:141] libmachine: (newest-cni-485099) DBG | skip adding static IP to network mk-newest-cni-485099 - found existing host DHCP lease matching {name: "newest-cni-485099", mac: "52:54:00:82:f5:00", ip: "192.168.72.213"}
	I0729 12:09:04.986257   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Getting to WaitForSSH function...
	I0729 12:09:04.986269   77368 main.go:141] libmachine: (newest-cni-485099) Waiting for SSH to be available...
	I0729 12:09:04.988584   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:04.988950   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:04.988978   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:04.989074   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Using SSH client type: external
	I0729 12:09:04.989102   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa (-rw-------)
	I0729 12:09:04.989132   77368 main.go:141] libmachine: (newest-cni-485099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:09:04.989145   77368 main.go:141] libmachine: (newest-cni-485099) DBG | About to run SSH command:
	I0729 12:09:04.989172   77368 main.go:141] libmachine: (newest-cni-485099) DBG | exit 0
	I0729 12:09:05.115037   77368 main.go:141] libmachine: (newest-cni-485099) DBG | SSH cmd err, output: <nil>: 
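	The "will retry after ...: waiting for machine to come up" lines above are minikube polling libvirt for the VM's DHCP lease and then waiting for SSH to answer, with growing delays between attempts. A minimal sketch of that wait-with-backoff pattern, in Go with made-up names and delay values (not minikube's actual retry helper), would look like this:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // waitFor polls check() with an increasing delay until it succeeds or the
	    // overall timeout elapses. Purely illustrative; names and delays are assumptions.
	    func waitFor(check func() error, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	delay := 300 * time.Millisecond
	    	for time.Now().Before(deadline) {
	    		if err := check(); err == nil {
	    			return nil
	    		}
	    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
	    		time.Sleep(delay)
	    		delay = delay * 3 / 2 // grow the delay between attempts
	    	}
	    	return errors.New("timed out waiting for machine to come up")
	    }

	    func main() {
	    	start := time.Now()
	    	err := waitFor(func() error {
	    		if time.Since(start) > 2*time.Second {
	    			return nil // pretend the VM finally got an IP
	    		}
	    		return errors.New("no DHCP lease yet")
	    	}, 10*time.Second)
	    	fmt.Println("result:", err)
	    }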
	I0729 12:09:05.115381   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetConfigRaw
	I0729 12:09:05.115941   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:09:05.118612   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.118965   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.118991   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.119246   77368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json ...
	I0729 12:09:05.119423   77368 machine.go:94] provisionDockerMachine start ...
	I0729 12:09:05.119444   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:05.119642   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:05.121787   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.122114   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.122147   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.122263   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:05.122445   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.122581   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.122737   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:05.122910   77368 main.go:141] libmachine: Using SSH client type: native
	I0729 12:09:05.123151   77368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:09:05.123164   77368 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:09:05.231096   77368 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 12:09:05.231133   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:09:05.231378   77368 buildroot.go:166] provisioning hostname "newest-cni-485099"
	I0729 12:09:05.231398   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:09:05.231582   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:05.234201   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.234561   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.234601   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.234720   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:05.234926   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.235087   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.235211   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:05.235383   77368 main.go:141] libmachine: Using SSH client type: native
	I0729 12:09:05.235554   77368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:09:05.235577   77368 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-485099 && echo "newest-cni-485099" | sudo tee /etc/hostname
	I0729 12:09:05.359629   77368 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-485099
	
	I0729 12:09:05.359660   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:05.362478   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.362846   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.362866   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.363064   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:05.363259   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.363424   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.363529   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:05.363680   77368 main.go:141] libmachine: Using SSH client type: native
	I0729 12:09:05.363840   77368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:09:05.363855   77368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-485099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-485099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-485099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:09:05.481901   77368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:09:05.481929   77368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 12:09:05.481947   77368 buildroot.go:174] setting up certificates
	I0729 12:09:05.481976   77368 provision.go:84] configureAuth start
	I0729 12:09:05.481990   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:09:05.482275   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:09:05.484840   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.485249   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.485277   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.485446   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:05.487814   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.488159   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.488176   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.488323   77368 provision.go:143] copyHostCerts
	I0729 12:09:05.488386   77368 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 12:09:05.488400   77368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 12:09:05.488500   77368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 12:09:05.488611   77368 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 12:09:05.488621   77368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 12:09:05.488649   77368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 12:09:05.488699   77368 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 12:09:05.488706   77368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 12:09:05.488726   77368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 12:09:05.488768   77368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.newest-cni-485099 san=[127.0.0.1 192.168.72.213 localhost minikube newest-cni-485099]
	I0729 12:09:05.840667   77368 provision.go:177] copyRemoteCerts
	I0729 12:09:05.840725   77368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:09:05.840748   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:05.843386   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.843657   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:05.843678   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:05.843867   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:05.844087   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:05.844300   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:05.844425   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:05.929826   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:09:05.955385   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 12:09:05.980591   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:09:06.004792   77368 provision.go:87] duration metric: took 522.80074ms to configureAuth
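	configureAuth above generates a server certificate whose subject alternative names cover 127.0.0.1, 192.168.72.213, localhost, minikube and newest-cni-485099, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A rough, self-signed sketch of producing a certificate with those SANs using Go's standard library (minikube instead signs it with its own CA key; everything below is illustrative only):

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"fmt"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-485099"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
	    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"localhost", "minikube", "newest-cni-485099"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.213")},
	    	}
	    	// Self-signed for brevity; minikube signs the server cert with its ca.pem/ca-key.pem instead.
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	    }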
	I0729 12:09:06.004817   77368 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:09:06.005013   77368 config.go:182] Loaded profile config "newest-cni-485099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:09:06.005091   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:06.007727   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.008051   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.008077   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.008284   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:06.008504   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.008677   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.008814   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:06.009006   77368 main.go:141] libmachine: Using SSH client type: native
	I0729 12:09:06.009173   77368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:09:06.009189   77368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:09:06.284744   77368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:09:06.284773   77368 machine.go:97] duration metric: took 1.165334175s to provisionDockerMachine
	I0729 12:09:06.284799   77368 start.go:293] postStartSetup for "newest-cni-485099" (driver="kvm2")
	I0729 12:09:06.284816   77368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:09:06.284859   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:06.285206   77368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:09:06.285236   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:06.287677   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.288098   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.288133   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.288252   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:06.288450   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.288607   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:06.288709   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:06.375880   77368 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:09:06.380755   77368 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:09:06.380786   77368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 12:09:06.380859   77368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 12:09:06.380978   77368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 12:09:06.381100   77368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:09:06.392914   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 12:09:06.419641   77368 start.go:296] duration metric: took 134.825988ms for postStartSetup
	I0729 12:09:06.419685   77368 fix.go:56] duration metric: took 21.022455898s for fixHost
	I0729 12:09:06.419708   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:06.422398   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.422740   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.422773   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.422924   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:06.423137   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.423315   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.423444   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:06.423602   77368 main.go:141] libmachine: Using SSH client type: native
	I0729 12:09:06.423754   77368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:09:06.423765   77368 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:09:06.531635   77368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254946.489968075
	
	I0729 12:09:06.531663   77368 fix.go:216] guest clock: 1722254946.489968075
	I0729 12:09:06.531673   77368 fix.go:229] Guest: 2024-07-29 12:09:06.489968075 +0000 UTC Remote: 2024-07-29 12:09:06.419690103 +0000 UTC m=+21.171644324 (delta=70.277972ms)
	I0729 12:09:06.531698   77368 fix.go:200] guest clock delta is within tolerance: 70.277972ms
	I0729 12:09:06.531704   77368 start.go:83] releasing machines lock for "newest-cni-485099", held for 21.134490669s
	I0729 12:09:06.531726   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:06.531994   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:09:06.534851   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.535204   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.535230   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.535395   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:06.535918   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:06.536068   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:06.536125   77368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:09:06.536170   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:06.536318   77368 ssh_runner.go:195] Run: cat /version.json
	I0729 12:09:06.536337   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:06.538795   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.539114   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.539165   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.539188   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.539214   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:06.539383   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.539566   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:06.539680   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:06.539702   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:06.539707   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:06.539876   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:06.540031   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:06.540245   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:06.540385   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:06.652439   77368 ssh_runner.go:195] Run: systemctl --version
	I0729 12:09:06.658953   77368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:09:06.803088   77368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:09:06.809599   77368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:09:06.809660   77368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:09:06.826121   77368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:09:06.826150   77368 start.go:495] detecting cgroup driver to use...
	I0729 12:09:06.826221   77368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:09:06.842661   77368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:09:06.857527   77368 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:09:06.857629   77368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:09:06.872524   77368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:09:06.887796   77368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:09:07.010803   77368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:09:07.164896   77368 docker.go:233] disabling docker service ...
	I0729 12:09:07.164976   77368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:09:07.180935   77368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:09:07.195755   77368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:09:07.348576   77368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:09:07.475464   77368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:09:07.492397   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:09:07.513491   77368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 12:09:07.513554   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.526045   77368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:09:07.526102   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.538773   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.550208   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.561957   77368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:09:07.573402   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.584805   77368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.603815   77368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:09:07.614916   77368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:09:07.624625   77368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:09:07.624706   77368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:09:07.639637   77368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:09:07.650085   77368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:09:07.778898   77368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:09:07.919752   77368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:09:07.919826   77368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:09:07.926271   77368 start.go:563] Will wait 60s for crictl version
	I0729 12:09:07.926321   77368 ssh_runner.go:195] Run: which crictl
	I0729 12:09:07.930405   77368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:09:07.970015   77368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:09:07.970096   77368 ssh_runner.go:195] Run: crio --version
	I0729 12:09:07.998478   77368 ssh_runner.go:195] Run: crio --version
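	The sed commands above pin the pause image to registry.k8s.io/pause:3.10 and switch cgroup_manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A hypothetical Go equivalent of those two in-place substitutions, operating on the file contents as a string (this is not what minikube actually runs; it shells out to sed as logged):

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    func main() {
	    	// Example drop-in contents before the rewrite (assumed values).
	    	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	    	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	    	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	    	fmt.Print(conf) // the rewritten config would then be written back and crio restarted
	    }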
	I0729 12:09:08.028471   77368 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 12:09:08.029844   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:09:08.032507   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:08.032849   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:08.032874   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:08.033130   77368 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 12:09:08.037589   77368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:09:08.053383   77368 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 12:09:08.054763   77368 kubeadm.go:883] updating cluster {Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:09:08.054897   77368 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:09:08.054966   77368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:09:08.095093   77368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 12:09:08.095170   77368 ssh_runner.go:195] Run: which lz4
	I0729 12:09:08.099337   77368 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:09:08.103656   77368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:09:08.103690   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 12:09:09.516548   77368 crio.go:462] duration metric: took 1.417246874s to copy over tarball
	I0729 12:09:09.516625   77368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:09:11.600404   77368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083735629s)
	I0729 12:09:11.600437   77368 crio.go:469] duration metric: took 2.083859217s to extract the tarball
	I0729 12:09:11.600447   77368 ssh_runner.go:146] rm: /preloaded.tar.lz4
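	The preload step above copies the ~387 MB lz4-compressed image tarball to the guest and unpacks it under /var with tar before deleting it. As an aside, the same archive can be inspected from Go; this sketch assumes the third-party github.com/pierrec/lz4/v4 package and is not part of minikube, which simply runs `tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4` on the guest:

	    package main

	    import (
	    	"archive/tar"
	    	"fmt"
	    	"io"
	    	"os"

	    	lz4 "github.com/pierrec/lz4/v4"
	    )

	    // List the entries of an lz4-compressed tarball such as the preloaded-images tarball.
	    func main() {
	    	f, err := os.Open("preloaded.tar.lz4")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()

	    	tr := tar.NewReader(lz4.NewReader(f))
	    	for {
	    		hdr, err := tr.Next()
	    		if err == io.EOF {
	    			break
	    		}
	    		if err != nil {
	    			panic(err)
	    		}
	    		fmt.Println(hdr.Name)
	    	}
	    }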
	I0729 12:09:11.638442   77368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:09:11.685477   77368 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:09:11.685504   77368 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:09:11.685514   77368 kubeadm.go:934] updating node { 192.168.72.213 8443 v1.31.0-beta.0 crio true true} ...
	I0729 12:09:11.685640   77368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-485099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:09:11.685716   77368 ssh_runner.go:195] Run: crio config
	I0729 12:09:11.735998   77368 cni.go:84] Creating CNI manager for ""
	I0729 12:09:11.736022   77368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:09:11.736034   77368 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 12:09:11.736054   77368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-485099 NodeName:newest-cni-485099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:09:11.736210   77368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-485099"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:09:11.736279   77368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 12:09:11.746274   77368 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:09:11.746347   77368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:09:11.755960   77368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 12:09:11.774574   77368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 12:09:11.792121   77368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
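	The kubeadm/kubelet/kube-proxy manifest printed earlier is what gets written to /var/tmp/minikube/kubeadm.yaml.new in the scp step just above. As a small, hypothetical sanity check (using gopkg.in/yaml.v3, which minikube does not necessarily use here), the kube-proxy document can be parsed to confirm the pod CIDR it carries:

	    package main

	    import (
	    	"fmt"

	    	"gopkg.in/yaml.v3"
	    )

	    type kubeProxyConfig struct {
	    	Kind        string `yaml:"kind"`
	    	ClusterCIDR string `yaml:"clusterCIDR"`
	    }

	    func main() {
	    	// The kube-proxy document from the generated config above.
	    	doc := "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nclusterCIDR: \"10.42.0.0/16\"\nmetricsBindAddress: 0.0.0.0:10249\n"
	    	var cfg kubeProxyConfig
	    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(cfg.Kind, "uses pod CIDR", cfg.ClusterCIDR) // KubeProxyConfiguration uses pod CIDR 10.42.0.0/16
	    }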
	I0729 12:09:11.810242   77368 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0729 12:09:11.814224   77368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:09:11.827260   77368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:09:11.957050   77368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:09:11.975308   77368 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099 for IP: 192.168.72.213
	I0729 12:09:11.975328   77368 certs.go:194] generating shared ca certs ...
	I0729 12:09:11.975343   77368 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:09:11.975488   77368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 12:09:11.975523   77368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 12:09:11.975532   77368 certs.go:256] generating profile certs ...
	I0729 12:09:11.975643   77368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.key
	I0729 12:09:11.975726   77368 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key.022f6aa0
	I0729 12:09:11.975768   77368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key
	I0729 12:09:11.975883   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 12:09:11.975915   77368 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 12:09:11.975927   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:09:11.975955   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:09:11.975984   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:09:11.976007   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 12:09:11.976050   77368 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 12:09:11.976852   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:09:12.004009   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:09:12.044146   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:09:12.073336   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 12:09:12.123541   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 12:09:12.156049   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:09:12.191831   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:09:12.216119   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:09:12.240183   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 12:09:12.264825   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:09:12.290362   77368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 12:09:12.315083   77368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:09:12.333038   77368 ssh_runner.go:195] Run: openssl version
	I0729 12:09:12.339788   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 12:09:12.353151   77368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 12:09:12.358793   77368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 12:09:12.358878   77368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 12:09:12.366060   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:09:12.379033   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:09:12.390770   77368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:09:12.395520   77368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:09:12.395582   77368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:09:12.401383   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:09:12.414579   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 12:09:12.426298   77368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 12:09:12.430786   77368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 12:09:12.430831   77368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 12:09:12.436564   77368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 12:09:12.448361   77368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:09:12.453499   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:09:12.459922   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:09:12.466235   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:09:12.472536   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:09:12.479069   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:09:12.485634   77368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
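The six openssl probes above mirror `openssl x509 -checkend 86400`: each control-plane certificate is checked for validity for at least another 24 hours before the existing cluster state is reused. A minimal Go sketch of the same check, assuming a single hypothetical certificate path under /var/lib/minikube/certs and using only the standard library (a sketch, not minikube's own certs.go logic):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}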
	I0729 12:09:12.492216   77368 kubeadm.go:392] StartCluster: {Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:09:12.492303   77368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:09:12.492359   77368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:09:12.536454   77368 cri.go:89] found id: ""
	I0729 12:09:12.536538   77368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:09:12.547642   77368 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 12:09:12.547666   77368 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 12:09:12.547717   77368 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 12:09:12.558665   77368 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:09:12.559280   77368 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-485099" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 12:09:12.559532   77368 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-485099" cluster setting kubeconfig missing "newest-cni-485099" context setting]
	I0729 12:09:12.560027   77368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:09:12.561223   77368 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 12:09:12.572152   77368 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.213
	I0729 12:09:12.572191   77368 kubeadm.go:1160] stopping kube-system containers ...
	I0729 12:09:12.572204   77368 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 12:09:12.572259   77368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:09:12.610356   77368 cri.go:89] found id: ""
	I0729 12:09:12.610430   77368 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 12:09:12.627436   77368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:09:12.637744   77368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:09:12.637769   77368 kubeadm.go:157] found existing configuration files:
	
	I0729 12:09:12.637818   77368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:09:12.648095   77368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:09:12.648171   77368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:09:12.658658   77368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:09:12.668876   77368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:09:12.668935   77368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:09:12.679307   77368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:09:12.689050   77368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:09:12.689105   77368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:09:12.699272   77368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:09:12.708672   77368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:09:12.708724   77368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:09:12.718659   77368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:09:12.728590   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:12.857239   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:13.821080   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:14.041354   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:14.110563   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:14.187940   77368 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:09:14.188040   77368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:09:14.689130   77368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:09:15.188551   77368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:09:15.218916   77368 api_server.go:72] duration metric: took 1.030972451s to wait for apiserver process to appear ...
	I0729 12:09:15.218948   77368 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:09:15.218970   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:15.219491   77368 api_server.go:269] stopped: https://192.168.72.213:8443/healthz: Get "https://192.168.72.213:8443/healthz": dial tcp 192.168.72.213:8443: connect: connection refused
	I0729 12:09:15.719476   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:17.993085   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 12:09:17.993115   77368 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 12:09:17.993130   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:18.012671   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 12:09:18.012704   77368 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 12:09:18.219858   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:18.231544   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 12:09:18.231571   77368 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 12:09:18.719906   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:18.730281   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 12:09:18.730312   77368 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 12:09:19.219877   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:19.224961   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0729 12:09:19.231350   77368 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 12:09:19.231376   77368 api_server.go:131] duration metric: took 4.012420648s to wait for apiserver health ...
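The sequence above shows the usual progression after a control-plane restart: connection refused while the static pods come up, 403 for the anonymous probe until RBAC bootstrap runs, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200 "ok". A minimal Go sketch of such an anonymous healthz poll, assuming the address from the log and skipping TLS verification because the serving cert is not trusted by the probe (a sketch, not the api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Anonymous probe of the apiserver healthz endpoint; expect 403, then 500,
	// then 200 with body "ok" as the post-start hooks complete.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	url := "https://192.168.72.213:8443/healthz" // address taken from the log
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz -> %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}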
	I0729 12:09:19.231385   77368 cni.go:84] Creating CNI manager for ""
	I0729 12:09:19.231392   77368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:09:19.233141   77368 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 12:09:19.234448   77368 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:09:19.247114   77368 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 12:09:19.269602   77368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:09:19.280647   77368 system_pods.go:59] 8 kube-system pods found
	I0729 12:09:19.280679   77368 system_pods.go:61] "coredns-5cfdc65f69-vchdw" [a2bf7b2e-0e88-4e8a-a682-57ee957d5169] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:09:19.280687   77368 system_pods.go:61] "etcd-newest-cni-485099" [1d6b4e3a-a131-40fd-bf83-cca845dc8e27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 12:09:19.280695   77368 system_pods.go:61] "kube-apiserver-newest-cni-485099" [1024e047-86aa-4336-a521-07dded91efa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 12:09:19.280701   77368 system_pods.go:61] "kube-controller-manager-newest-cni-485099" [13c0b1cd-73cd-4455-a652-5d8fc11efa34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:09:19.280707   77368 system_pods.go:61] "kube-proxy-p6msd" [b333146b-57e5-488f-a818-e22377c59273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 12:09:19.280713   77368 system_pods.go:61] "kube-scheduler-newest-cni-485099" [917b6532-e2a9-483d-b86b-24ee5b97193a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 12:09:19.280718   77368 system_pods.go:61] "metrics-server-78fcd8795b-p8crn" [4733e72d-1e00-4b92-ad29-0a6a74e17770] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:09:19.280725   77368 system_pods.go:61] "storage-provisioner" [c2d1a918-446b-4523-9c18-13133a84c91a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 12:09:19.280734   77368 system_pods.go:74] duration metric: took 11.104297ms to wait for pod list to return data ...
	I0729 12:09:19.280742   77368 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:09:19.285157   77368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:09:19.285196   77368 node_conditions.go:123] node cpu capacity is 2
	I0729 12:09:19.285211   77368 node_conditions.go:105] duration metric: took 4.461927ms to run NodePressure ...
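The kube-system pod list and NodePressure checks above read data that any apiserver client can fetch. A minimal client-go sketch that lists kube-system pods and prints node cpu and ephemeral-storage capacity, assuming the profile kubeconfig path shown in the log (a sketch of the same queries, not the system_pods.go or node_conditions.go code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for other environments.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19337-3845/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same information as the system_pods wait: kube-system pods and their phase.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
	// Same information as the NodePressure check: cpu and ephemeral storage capacity.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}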
	I0729 12:09:19.285231   77368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:09:19.599726   77368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:09:19.611721   77368 ops.go:34] apiserver oom_adj: -16
	I0729 12:09:19.611753   77368 kubeadm.go:597] duration metric: took 7.064073718s to restartPrimaryControlPlane
	I0729 12:09:19.611764   77368 kubeadm.go:394] duration metric: took 7.119555238s to StartCluster
	I0729 12:09:19.611783   77368 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:09:19.611876   77368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 12:09:19.613069   77368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:09:19.613336   77368 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:09:19.613403   77368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 12:09:19.613481   77368 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-485099"
	I0729 12:09:19.613503   77368 addons.go:69] Setting default-storageclass=true in profile "newest-cni-485099"
	I0729 12:09:19.613526   77368 addons.go:69] Setting metrics-server=true in profile "newest-cni-485099"
	I0729 12:09:19.613543   77368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-485099"
	I0729 12:09:19.613549   77368 addons.go:234] Setting addon metrics-server=true in "newest-cni-485099"
	W0729 12:09:19.613558   77368 addons.go:243] addon metrics-server should already be in state true
	I0729 12:09:19.613589   77368 host.go:66] Checking if "newest-cni-485099" exists ...
	I0729 12:09:19.613594   77368 config.go:182] Loaded profile config "newest-cni-485099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:09:19.613512   77368 addons.go:69] Setting dashboard=true in profile "newest-cni-485099"
	I0729 12:09:19.613627   77368 addons.go:234] Setting addon dashboard=true in "newest-cni-485099"
	W0729 12:09:19.613639   77368 addons.go:243] addon dashboard should already be in state true
	I0729 12:09:19.613671   77368 host.go:66] Checking if "newest-cni-485099" exists ...
	I0729 12:09:19.613517   77368 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-485099"
	W0729 12:09:19.613706   77368 addons.go:243] addon storage-provisioner should already be in state true
	I0729 12:09:19.613747   77368 host.go:66] Checking if "newest-cni-485099" exists ...
	I0729 12:09:19.613980   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.614012   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.613980   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.614053   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.614102   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.614115   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.614132   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.614105   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.615037   77368 out.go:177] * Verifying Kubernetes components...
	I0729 12:09:19.616460   77368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:09:19.629812   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0729 12:09:19.629834   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0729 12:09:19.629818   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
	I0729 12:09:19.630235   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.630251   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.630547   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.630769   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.630771   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.630784   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.630788   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.631177   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.631332   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.631357   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.631371   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.631565   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:09:19.631687   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.631771   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.631819   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.632296   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.632328   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.632351   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0729 12:09:19.632690   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.633162   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.633187   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.633562   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.634191   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.634225   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.634595   77368 addons.go:234] Setting addon default-storageclass=true in "newest-cni-485099"
	W0729 12:09:19.634612   77368 addons.go:243] addon default-storageclass should already be in state true
	I0729 12:09:19.634639   77368 host.go:66] Checking if "newest-cni-485099" exists ...
	I0729 12:09:19.634945   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.634986   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.647648   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0729 12:09:19.648408   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.648978   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.648995   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.649367   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.649536   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:09:19.651205   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:19.652880   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0729 12:09:19.653321   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.653834   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.653852   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.654275   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0729 12:09:19.654371   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.654517   77368 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 12:09:19.654580   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:09:19.654799   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.655454   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.655476   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.655773   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.655838   77368 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 12:09:19.655862   77368 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 12:09:19.655882   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:19.656287   77368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:09:19.656324   77368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:09:19.657193   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0729 12:09:19.657513   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.658009   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.658028   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.659130   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:19.659293   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.659574   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.659635   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:09:19.660053   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:19.660080   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.660293   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:19.660524   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:19.660669   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:19.660811   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:19.660995   77368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:09:19.661242   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:19.662779   77368 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0729 12:09:19.662855   77368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:09:19.662873   77368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 12:09:19.662887   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:19.665135   77368 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0729 12:09:19.665650   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.666021   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:19.666043   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.666221   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:19.666355   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:19.666387   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0729 12:09:19.666398   77368 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0729 12:09:19.666415   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:19.666461   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:19.666581   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:19.669622   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.669930   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:19.669955   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.670232   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:19.670431   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:19.670570   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:19.670711   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:19.678019   77368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0729 12:09:19.678452   77368 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:09:19.679052   77368 main.go:141] libmachine: Using API Version  1
	I0729 12:09:19.679070   77368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:09:19.679449   77368 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:09:19.679615   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:09:19.681216   77368 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:09:19.681408   77368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 12:09:19.681422   77368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 12:09:19.681433   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:09:19.684025   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.684441   77368 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:57 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:09:19.684468   77368 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:09:19.684672   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:09:19.684859   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:09:19.685027   77368 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:09:19.685174   77368 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:09:19.823994   77368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:09:19.842167   77368 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:09:19.842242   77368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:09:19.858190   77368 api_server.go:72] duration metric: took 244.819854ms to wait for apiserver process to appear ...
	I0729 12:09:19.858220   77368 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:09:19.858241   77368 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0729 12:09:19.862965   77368 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0729 12:09:19.863840   77368 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 12:09:19.863857   77368 api_server.go:131] duration metric: took 5.630013ms to wait for apiserver health ...
	I0729 12:09:19.863865   77368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:09:19.869240   77368 system_pods.go:59] 8 kube-system pods found
	I0729 12:09:19.869270   77368 system_pods.go:61] "coredns-5cfdc65f69-vchdw" [a2bf7b2e-0e88-4e8a-a682-57ee957d5169] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:09:19.869281   77368 system_pods.go:61] "etcd-newest-cni-485099" [1d6b4e3a-a131-40fd-bf83-cca845dc8e27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 12:09:19.869293   77368 system_pods.go:61] "kube-apiserver-newest-cni-485099" [1024e047-86aa-4336-a521-07dded91efa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 12:09:19.869303   77368 system_pods.go:61] "kube-controller-manager-newest-cni-485099" [13c0b1cd-73cd-4455-a652-5d8fc11efa34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:09:19.869310   77368 system_pods.go:61] "kube-proxy-p6msd" [b333146b-57e5-488f-a818-e22377c59273] Running
	I0729 12:09:19.869322   77368 system_pods.go:61] "kube-scheduler-newest-cni-485099" [917b6532-e2a9-483d-b86b-24ee5b97193a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 12:09:19.869332   77368 system_pods.go:61] "metrics-server-78fcd8795b-p8crn" [4733e72d-1e00-4b92-ad29-0a6a74e17770] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 12:09:19.869341   77368 system_pods.go:61] "storage-provisioner" [c2d1a918-446b-4523-9c18-13133a84c91a] Running
	I0729 12:09:19.869351   77368 system_pods.go:74] duration metric: took 5.479128ms to wait for pod list to return data ...
	I0729 12:09:19.869363   77368 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:09:19.872281   77368 default_sa.go:45] found service account: "default"
	I0729 12:09:19.872300   77368 default_sa.go:55] duration metric: took 2.928467ms for default service account to be created ...
	I0729 12:09:19.872309   77368 kubeadm.go:582] duration metric: took 258.943123ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 12:09:19.872321   77368 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:09:19.874941   77368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:09:19.874957   77368 node_conditions.go:123] node cpu capacity is 2
	I0729 12:09:19.874965   77368 node_conditions.go:105] duration metric: took 2.639511ms to run NodePressure ...
	I0729 12:09:19.874975   77368 start.go:241] waiting for startup goroutines ...
	I0729 12:09:19.910958   77368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 12:09:19.930204   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0729 12:09:19.930239   77368 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0729 12:09:19.972477   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0729 12:09:19.972503   77368 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0729 12:09:20.030553   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0729 12:09:20.030581   77368 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0729 12:09:20.035322   77368 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 12:09:20.035345   77368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 12:09:20.059875   77368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:09:20.112283   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0729 12:09:20.112315   77368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0729 12:09:20.121271   77368 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 12:09:20.121307   77368 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 12:09:20.188365   77368 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:09:20.188394   77368 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 12:09:20.193581   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0729 12:09:20.193607   77368 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0729 12:09:20.227078   77368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 12:09:20.318413   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0729 12:09:20.318439   77368 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0729 12:09:20.503481   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0729 12:09:20.503509   77368 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0729 12:09:20.534229   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:20.534250   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:20.534547   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:20.534562   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:20.534571   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:20.534580   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:20.534587   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:20.534833   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:20.534849   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:20.534833   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:20.547228   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:20.547251   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:20.547511   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:20.547528   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:20.605068   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0729 12:09:20.605090   77368 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0729 12:09:20.637290   77368 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 12:09:20.637321   77368 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0729 12:09:20.690742   77368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 12:09:21.836200   77368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.776287671s)
	I0729 12:09:21.836260   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:21.836273   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:21.836299   77368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609190854s)
	I0729 12:09:21.836338   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:21.836355   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:21.836701   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:21.836711   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:21.836729   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:21.836736   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:21.836736   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:21.836747   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:21.836750   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:21.836759   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:21.836768   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:21.836777   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:21.836994   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:21.837077   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:21.837005   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:21.837103   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:21.837111   77368 addons.go:475] Verifying addon metrics-server=true in "newest-cni-485099"
	I0729 12:09:21.837025   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:21.837134   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:22.051381   77368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.360585334s)
	I0729 12:09:22.051433   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:22.051445   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:22.051754   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:22.051765   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:22.051779   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:22.051792   77368 main.go:141] libmachine: Making call to close driver server
	I0729 12:09:22.051804   77368 main.go:141] libmachine: (newest-cni-485099) Calling .Close
	I0729 12:09:22.052144   77368 main.go:141] libmachine: (newest-cni-485099) DBG | Closing plugin on server side
	I0729 12:09:22.052147   77368 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:09:22.052198   77368 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:09:22.053655   77368 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-485099 addons enable metrics-server
	
	I0729 12:09:22.055013   77368 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0729 12:09:22.056386   77368 addons.go:510] duration metric: took 2.44298544s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0729 12:09:22.056426   77368 start.go:246] waiting for cluster config update ...
	I0729 12:09:22.056439   77368 start.go:255] writing updated cluster config ...
	I0729 12:09:22.056734   77368 ssh_runner.go:195] Run: rm -f paused
	I0729 12:09:22.104356   77368 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 12:09:22.105803   77368 out.go:177] * Done! kubectl is now configured to use "newest-cni-485099" cluster and "default" namespace by default
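	
	For anyone reproducing this run, the addon set reported above can be double-checked against the profile once the start completes; a minimal sketch (the profile name newest-cni-485099 is taken from the log above, and kubernetes-dashboard is the namespace the dashboard addon normally deploys into):
	
		minikube -p newest-cni-485099 addons list
		kubectl --context newest-cni-485099 -n kubernetes-dashboard get pods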
	
	
	==> CRI-O <==
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.497008995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254992496984946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40d2d9e2-9c4d-4d4f-94b1-fb35abf8436f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.497615160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df01c50a-2b0f-41ce-bba5-df962c046855 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.497663311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df01c50a-2b0f-41ce-bba5-df962c046855 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.497839850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df01c50a-2b0f-41ce-bba5-df962c046855 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.535393237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=189a02b7-1e3d-4841-a4e8-806d14f717eb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.535642158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=189a02b7-1e3d-4841-a4e8-806d14f717eb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.536876153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69948e0a-7bad-4bc6-9dc7-70e481f85867 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.537293585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254992537267340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69948e0a-7bad-4bc6-9dc7-70e481f85867 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.537926341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aa1560f-6114-4cad-8018-d527055c3463 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.537979806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aa1560f-6114-4cad-8018-d527055c3463 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.538168889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aa1560f-6114-4cad-8018-d527055c3463 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.575800333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29f597bc-f8ac-4575-a0ff-a70845616d07 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.575883401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29f597bc-f8ac-4575-a0ff-a70845616d07 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.577294108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8da968f8-e812-492d-9271-2feaef7be4d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.577921830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254992577863555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da968f8-e812-492d-9271-2feaef7be4d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.578392409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=632a35b1-a8e6-4567-802a-de3f749649c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.578442895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=632a35b1-a8e6-4567-802a-de3f749649c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.578701656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=632a35b1-a8e6-4567-802a-de3f749649c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.613111381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b42135a-5013-4e81-91f6-cd7f724da5a3 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.613201205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b42135a-5013-4e81-91f6-cd7f724da5a3 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.614381311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=421b3980-5add-44ea-b2df-2ab5d2b6d89c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.614903250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254992614879505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=421b3980-5add-44ea-b2df-2ab5d2b6d89c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.615497078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5529e0ef-acba-45e3-ae4b-6720b5ebcaae name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.615601495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5529e0ef-acba-45e3-ae4b-6720b5ebcaae name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:09:52 default-k8s-diff-port-754486 crio[723]: time="2024-07-29 12:09:52.615788702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f,PodSandboxId:fc62c0b588aa7e9360f5e2d2bd7fcbb5831c62778235d28ac1116339aca2968b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253981784726638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5b7866-e0f0-4f25-a9d9-0eba38db9e76,},Annotations:map[string]string{io.kubernetes.container.hash: 62d389d4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a,PodSandboxId:f2fc618fbe77c03e0457127c411ae9779e8f175896c4a782dc37831040af25bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980816486562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zl6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1182fef3-3604-44e8-b428-97677c9b1e72,},Annotations:map[string]string{io.kubernetes.container.hash: 647cdee1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d,PodSandboxId:1af5452099f0f6b0c41a1b64f1f45a96fbcfcffface147c8d32f1084b4c05721,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253980730178617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fbcqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0e2834a2-a70d-4770-9f11-679f711a0207,},Annotations:map[string]string{io.kubernetes.container.hash: 8b95219e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa,PodSandboxId:78c72d80c552f2ca78c3f47eaef55d343aafd3d017e41a379a14b895160778ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722253979780671629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gkd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6699fd97-db3a-4ad9-911e-637b6401ba46,},Annotations:map[string]string{io.kubernetes.container.hash: 6bca9732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b,PodSandboxId:4d87784eb928ac66d1bfe607d53025d0e52180b2b2766c558d323bec0e97b546,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253960129192327,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c15d3319712b79163fc19ca44c59aba0,},Annotations:map[string]string{io.kubernetes.container.hash: c7f54f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b,PodSandboxId:c31d3dd5298f3f3bf1de00cbc257f13727ffdf3fe81464c9a1b2ceeb8bde1b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253960163726774,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27b76178b47939f93fc1d48704ba2f37,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c,PodSandboxId:5ee74588375fcf14a4bfb37999c8cf22ce17a7641845a53d43e9e74fe8bc0e83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253960084081570,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6bbcf80b45c58306dbbe42a634562a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6,PodSandboxId:0fed01a0da3e0d2b9d8079f95ce2bcd779d34a2e4dea5d122571aaca744e2475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253960057051949,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e218a76b4d6e35d9928c77efd7ba3b21,},Annotations:map[string]string{io.kubernetes.container.hash: e6d1e29d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5529e0ef-acba-45e3-ae4b-6720b5ebcaae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2548381a4637a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   fc62c0b588aa7       storage-provisioner
	43bb20ba9479b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   f2fc618fbe77c       coredns-7db6d8ff4d-4zl6p
	f2ef4e8748fa9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   1af5452099f0f       coredns-7db6d8ff4d-fbcqh
	4fd5708a05499       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   78c72d80c552f       kube-proxy-7gkd8
	eedcbcfb43e07       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   17 minutes ago      Running             kube-scheduler            2                   c31d3dd5298f3       kube-scheduler-default-k8s-diff-port-754486
	df8f676fc0fb2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   4d87784eb928a       etcd-default-k8s-diff-port-754486
	0f3bba7db5b3e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   17 minutes ago      Running             kube-controller-manager   2                   5ee74588375fc       kube-controller-manager-default-k8s-diff-port-754486
	5d436678cc067       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   17 minutes ago      Running             kube-apiserver            2                   0fed01a0da3e0       kube-apiserver-default-k8s-diff-port-754486
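	
	The listing above is what the node's CRI reports; a rough way to regenerate it, assuming the default-k8s-diff-port-754486 VM from this run is still up:
	
		minikube -p default-k8s-diff-port-754486 ssh -- sudo crictl ps -a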
	
	
	==> coredns [43bb20ba9479b3b1a793ab8227ccb4677960dedd830fc61fb840bbbd1109298a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f2ef4e8748fa949f582efd65be5571462046812cdfa90b29c6b5240da694f63d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-754486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-754486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=default-k8s-diff-port-754486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-754486
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:09:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:08:26 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:08:26 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:08:26 +0000   Mon, 29 Jul 2024 11:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:08:26 +0000   Mon, 29 Jul 2024 11:52:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.111
	  Hostname:    default-k8s-diff-port-754486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d99557284a0142b5a46816e2f198f833
	  System UUID:                d9955728-4a01-42b5-a468-16e2f198f833
	  Boot ID:                    76398773-4aec-4953-b7f8-29c936d15aff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4zl6p                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-fbcqh                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-754486                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-754486              250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-754486     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-7gkd8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-754486              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-569cc877fc-rgzfc                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-754486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-754486 event: Registered Node default-k8s-diff-port-754486 in Controller
	
	
	==> dmesg <==
	[  +0.050893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042268] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.953820] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.551385] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584252] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.143496] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058760] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065046] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.174638] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.149018] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.331400] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.601861] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061440] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.847386] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.677785] kauditd_printk_skb: 97 callbacks suppressed
	[Jul29 11:48] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 11:52] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.763542] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +4.878992] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.193480] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[ +13.909823] systemd-fstab-generator[4112]: Ignoring "noauto" option for root device
	[  +0.085256] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 11:54] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [df8f676fc0fb26d0ed2616b226df641b93640d475fab5670b57b27d5ce63157b] <==
	{"level":"info","ts":"2024-07-29T11:52:41.243942Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:41.246655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:41.246746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:52:41.246824Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d094f4edb090bf55","local-member-id":"56d480afbf0abc79","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246938Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246975Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:52:41.246986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:52:41.257963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.111:2379"}
	{"level":"info","ts":"2024-07-29T11:52:41.296061Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:02:41.374237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2024-07-29T12:02:41.390338Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":716,"took":"15.569151ms","hash":782787934,"current-db-size-bytes":2248704,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2248704,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-29T12:02:41.390399Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":782787934,"revision":716,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T12:07:41.386015Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2024-07-29T12:07:41.390511Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":960,"took":"3.82927ms","hash":3708222958,"current-db-size-bytes":2248704,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T12:07:41.390656Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3708222958,"revision":960,"compact-revision":716}
	{"level":"warn","ts":"2024-07-29T12:08:20.520767Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.147304ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13581045573349634269 > lease_revoke:<id:3c7990fe55e5708a>","response":"size:28"}
	{"level":"info","ts":"2024-07-29T12:08:21.089133Z","caller":"traceutil/trace.go:171","msg":"trace[1921884777] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"160.405675ms","start":"2024-07-29T12:08:20.928674Z","end":"2024-07-29T12:08:21.08908Z","steps":["trace[1921884777] 'process raft request'  (duration: 160.229732ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:08:21.980847Z","caller":"traceutil/trace.go:171","msg":"trace[1354935541] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"101.786293ms","start":"2024-07-29T12:08:21.879043Z","end":"2024-07-29T12:08:21.980829Z","steps":["trace[1354935541] 'process raft request'  (duration: 101.648697ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:13.597353Z","caller":"traceutil/trace.go:171","msg":"trace[2042228411] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"124.455242ms","start":"2024-07-29T12:09:13.472862Z","end":"2024-07-29T12:09:13.597317Z","steps":["trace[2042228411] 'process raft request'  (duration: 124.047026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:09:13.970646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.063681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:09:13.974016Z","caller":"traceutil/trace.go:171","msg":"trace[1482066209] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1279; }","duration":"202.577085ms","start":"2024-07-29T12:09:13.771416Z","end":"2024-07-29T12:09:13.973993Z","steps":["trace[1482066209] 'count revisions from in-memory index tree'  (duration: 198.99831ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:14.231972Z","caller":"traceutil/trace.go:171","msg":"trace[1653759470] linearizableReadLoop","detail":"{readStateIndex:1493; appliedIndex:1492; }","duration":"111.937449ms","start":"2024-07-29T12:09:14.120015Z","end":"2024-07-29T12:09:14.231952Z","steps":["trace[1653759470] 'read index received'  (duration: 111.716123ms)","trace[1653759470] 'applied index is now lower than readState.Index'  (duration: 220.163µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:09:14.23213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.094531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:09:14.232201Z","caller":"traceutil/trace.go:171","msg":"trace[1874042059] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1280; }","duration":"112.18122ms","start":"2024-07-29T12:09:14.120008Z","end":"2024-07-29T12:09:14.23219Z","steps":["trace[1874042059] 'agreement among raft nodes before linearized reading'  (duration: 112.049152ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:09:14.232494Z","caller":"traceutil/trace.go:171","msg":"trace[1189860383] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"243.074271ms","start":"2024-07-29T12:09:13.989405Z","end":"2024-07-29T12:09:14.23248Z","steps":["trace[1189860383] 'process raft request'  (duration: 242.380198ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:52 up 22 min,  0 users,  load average: 0.34, 0.24, 0.19
	Linux default-k8s-diff-port-754486 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5d436678cc067136b81bf3d2656543dbef39d25fa24f17f5061972a0a9fd61a6] <==
	I0729 12:03:44.179249       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:05:44.178887       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:05:44.178987       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:05:44.178998       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:05:44.180053       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:05:44.180204       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:05:44.180242       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:07:43.182114       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:43.182411       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 12:07:44.183505       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:44.183659       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:07:44.183730       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:07:44.183663       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:07:44.184027       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:07:44.185283       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:08:44.184487       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:08:44.184867       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 12:08:44.184904       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:08:44.185793       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 12:08:44.185875       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 12:08:44.185928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0f3bba7db5b3ea6dc447a15455dde54c6e86f41acb55ddad7797709becaf8a1c] <==
	I0729 12:04:14.990875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="77.258µs"
	E0729 12:04:28.780947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:04:29.290851       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:04:58.786304       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:04:59.300649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:05:28.792433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:05:29.308642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:05:58.799071       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:05:59.316490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:28.804721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:06:29.326457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:58.811387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:06:59.335703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:28.817878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:07:29.345860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:58.823780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:07:59.355654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:08:28.829394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:08:29.364800       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:08:58.835338       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:08:59.373115       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:09:14.238713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="534.359µs"
	I0729 12:09:26.986183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="122.174µs"
	E0729 12:09:28.839954       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 12:09:29.381396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4fd5708a054999c6ee13cc327c4d6922f140dc1bc5cc524d11457676954b28aa] <==
	I0729 11:53:00.034338       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:53:00.052328       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.111"]
	I0729 11:53:00.163925       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:53:00.163962       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:53:00.163978       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:53:00.174728       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:53:00.174932       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:53:00.174962       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:53:00.176020       1 config.go:192] "Starting service config controller"
	I0729 11:53:00.176045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:53:00.176092       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:53:00.176096       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:53:00.180405       1 config.go:319] "Starting node config controller"
	I0729 11:53:00.180418       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:53:00.277051       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:53:00.277152       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:53:00.280514       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eedcbcfb43e07be1975870cd0b80f100cd159ceed323091fcfb990b0ee5d8f9b] <==
	W0729 11:52:44.112760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:52:44.112894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:52:44.133346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.133500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.180985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:52:44.181210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:52:44.204215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:52:44.204309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:52:44.308317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:52:44.308411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:52:44.408187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.408237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.432330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:52:44.434162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:52:44.450876       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:52:44.451031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:52:44.451177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:52:44.451261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:52:44.496771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:52:44.496914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:52:44.527924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:52:44.528025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:52:44.658045       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:52:44.658114       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:52:46.478527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:07:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:07:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:07:53 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:07:53.973482    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:08:07 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:08:07.972920    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:08:21 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:08:21.973424    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:08:35 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:08:35.974813    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:08:46 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:08:46.006484    3920 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:08:46 default-k8s-diff-port-754486 kubelet[3920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:08:46 default-k8s-diff-port-754486 kubelet[3920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:08:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:08:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:08:47 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:08:47.971700    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:09:02 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:02.987314    3920 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 12:09:02 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:02.987792    3920 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 12:09:02 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:02.988379    3920 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lnp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-rgzfc_kube-system(cc8f9151-b09f-4a1d-95bc-2e271bbf24e4): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 12:09:02 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:02.988641    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:09:13 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:13.973352    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:09:26 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:26.972164    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:09:39 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:39.976264    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	Jul 29 12:09:46 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:46.005405    3920 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:09:46 default-k8s-diff-port-754486 kubelet[3920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:09:46 default-k8s-diff-port-754486 kubelet[3920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:09:46 default-k8s-diff-port-754486 kubelet[3920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:09:46 default-k8s-diff-port-754486 kubelet[3920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:09:52 default-k8s-diff-port-754486 kubelet[3920]: E0729 12:09:52.972428    3920 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rgzfc" podUID="cc8f9151-b09f-4a1d-95bc-2e271bbf24e4"
	
	
	==> storage-provisioner [2548381a4637ac98fd22c06d25f8d5e22cf98f860a519fb015f39ea86567e99f] <==
	I0729 11:53:01.894174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:53:01.907339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:53:01.907483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:53:01.918488       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:53:01.918810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655!
	I0729 11:53:01.922051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0660db1c-300d-466a-9dbd-76ccadc16e39", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655 became leader
	I0729 11:53:02.021065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754486_9e96901f-bf90-47f8-ae14-c72364303655!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rgzfc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc: exit status 1 (59.283599ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rgzfc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-754486 describe pod metrics-server-569cc877fc-rgzfc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (465.14s)
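For context on the non-running pod reported above: the kubelet log in this post-mortem shows every pull of fake.domain/registry.k8s.io/echoserver:1.4 failing with "dial tcp: lookup fake.domain: no such host", so metrics-server-569cc877fc-rgzfc sits in ImagePullBackOff for the whole run. A quick manual way to confirm which image the addon is pinned to (a sketch, assuming the profile is still up and the Deployment is named metrics-server, as the ReplicaSet name suggests; the jsonpath filter is just one way to print it) is:

    kubectl --context default-k8s-diff-port-754486 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'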

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (336.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-297799 -n no-preload-297799
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 12:08:25.389937271 +0000 UTC m=+6483.473665763
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-297799 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-297799 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-297799 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
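The image check that timed out here can be repeated by hand once the API server is reachable again; the context and namespace come from the commands above, and querying with jsonpath instead of describe is just one convenient way to surface the container image:

    kubectl --context no-preload-297799 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects that image string to contain registry.k8s.io/echoserver:1.4, matching the --images=MetricsScraper=registry.k8s.io/echoserver:1.4 override recorded for "addons enable dashboard -p no-preload-297799" in the Audit table below.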
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-297799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-297799 logs -n 25: (1.377228258s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	| start   | -p newest-cni-485099 --memory=2200 --alsologtostderr   | newest-cni-485099            | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
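For reference, the last start entry in the table (newest-cni-485099) reassembles into the single command line below; the flags are exactly those listed above and the binary path is the MINIKUBE_BIN recorded in the log that follows, so nothing here is new:

	out/minikube-linux-amd64 start -p newest-cni-485099 --memory=2200 --alsologtostderr \
		--wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
		--network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-beta.0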
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:07:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:07:48.897473   76627 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:07:48.897693   76627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:07:48.897705   76627 out.go:304] Setting ErrFile to fd 2...
	I0729 12:07:48.897712   76627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:07:48.898207   76627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 12:07:48.898867   76627 out.go:298] Setting JSON to false
	I0729 12:07:48.899869   76627 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6615,"bootTime":1722248254,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:07:48.899923   76627 start.go:139] virtualization: kvm guest
	I0729 12:07:48.902347   76627 out.go:177] * [newest-cni-485099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:07:48.903758   76627 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 12:07:48.903811   76627 notify.go:220] Checking for updates...
	I0729 12:07:48.906399   76627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:07:48.907643   76627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 12:07:48.908909   76627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 12:07:48.910068   76627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:07:48.911380   76627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:07:48.913367   76627 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:07:48.913513   76627 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:07:48.913652   76627 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:07:48.913760   76627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:07:48.951256   76627 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:07:48.952532   76627 start.go:297] selected driver: kvm2
	I0729 12:07:48.952544   76627 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:07:48.952554   76627 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:07:48.953237   76627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:07:48.953311   76627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:07:48.969569   76627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:07:48.969624   76627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 12:07:48.969661   76627 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 12:07:48.969955   76627 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 12:07:48.970028   76627 cni.go:84] Creating CNI manager for ""
	I0729 12:07:48.970045   76627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:07:48.970060   76627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:07:48.970160   76627 start.go:340] cluster config:
	{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:07:48.970308   76627 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:07:48.972663   76627 out.go:177] * Starting "newest-cni-485099" primary control-plane node in "newest-cni-485099" cluster
	I0729 12:07:48.974248   76627 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:07:48.974311   76627 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:07:48.974322   76627 cache.go:56] Caching tarball of preloaded images
	I0729 12:07:48.974423   76627 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:07:48.974433   76627 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 12:07:48.974547   76627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json ...
	I0729 12:07:48.974566   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json: {Name:mk8d465c8218242f01e0e066530ffd0f46f13d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:07:48.974724   76627 start.go:360] acquireMachinesLock for newest-cni-485099: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:07:48.974784   76627 start.go:364] duration metric: took 38.045µs to acquireMachinesLock for "newest-cni-485099"
	I0729 12:07:48.974807   76627 start.go:93] Provisioning new machine with config: &{Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:07:48.974870   76627 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:07:48.976504   76627 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 12:07:48.976652   76627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:07:48.976688   76627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:07:48.991986   76627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0729 12:07:48.992420   76627 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:07:48.993037   76627 main.go:141] libmachine: Using API Version  1
	I0729 12:07:48.993062   76627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:07:48.993392   76627 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:07:48.993589   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:07:48.993761   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:07:48.993904   76627 start.go:159] libmachine.API.Create for "newest-cni-485099" (driver="kvm2")
	I0729 12:07:48.993929   76627 client.go:168] LocalClient.Create starting
	I0729 12:07:48.993959   76627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem
	I0729 12:07:48.993993   76627 main.go:141] libmachine: Decoding PEM data...
	I0729 12:07:48.994008   76627 main.go:141] libmachine: Parsing certificate...
	I0729 12:07:48.994056   76627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem
	I0729 12:07:48.994075   76627 main.go:141] libmachine: Decoding PEM data...
	I0729 12:07:48.994086   76627 main.go:141] libmachine: Parsing certificate...
	I0729 12:07:48.994098   76627 main.go:141] libmachine: Running pre-create checks...
	I0729 12:07:48.994108   76627 main.go:141] libmachine: (newest-cni-485099) Calling .PreCreateCheck
	I0729 12:07:48.994421   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetConfigRaw
	I0729 12:07:48.994819   76627 main.go:141] libmachine: Creating machine...
	I0729 12:07:48.994832   76627 main.go:141] libmachine: (newest-cni-485099) Calling .Create
	I0729 12:07:48.994957   76627 main.go:141] libmachine: (newest-cni-485099) Creating KVM machine...
	I0729 12:07:48.996366   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found existing default KVM network
	I0729 12:07:48.997665   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:48.997536   76650 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:50:55} reservation:<nil>}
	I0729 12:07:48.998567   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:48.998491   76650 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2f:fb:f4} reservation:<nil>}
	I0729 12:07:48.999352   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:48.999264   76650 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:29:59:3f} reservation:<nil>}
	I0729 12:07:49.000373   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:49.000256   76650 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0729 12:07:49.000397   76627 main.go:141] libmachine: (newest-cni-485099) DBG | created network xml: 
	I0729 12:07:49.000410   76627 main.go:141] libmachine: (newest-cni-485099) DBG | <network>
	I0729 12:07:49.000424   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   <name>mk-newest-cni-485099</name>
	I0729 12:07:49.000437   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   <dns enable='no'/>
	I0729 12:07:49.000447   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   
	I0729 12:07:49.000457   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 12:07:49.000467   76627 main.go:141] libmachine: (newest-cni-485099) DBG |     <dhcp>
	I0729 12:07:49.000475   76627 main.go:141] libmachine: (newest-cni-485099) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 12:07:49.000484   76627 main.go:141] libmachine: (newest-cni-485099) DBG |     </dhcp>
	I0729 12:07:49.000496   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   </ip>
	I0729 12:07:49.000508   76627 main.go:141] libmachine: (newest-cni-485099) DBG |   
	I0729 12:07:49.000517   76627 main.go:141] libmachine: (newest-cni-485099) DBG | </network>
	I0729 12:07:49.000524   76627 main.go:141] libmachine: (newest-cni-485099) DBG | 
	I0729 12:07:49.006066   76627 main.go:141] libmachine: (newest-cni-485099) DBG | trying to create private KVM network mk-newest-cni-485099 192.168.72.0/24...
	I0729 12:07:49.079360   76627 main.go:141] libmachine: (newest-cni-485099) DBG | private KVM network mk-newest-cni-485099 192.168.72.0/24 created
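At this point the driver has defined and started the private libvirt network from the XML printed above. For anyone reproducing this step by hand, a rough virsh equivalent would be the following, assuming the XML is saved to a file named mk-newest-cni-485099.xml (hypothetical filename):

	virsh --connect qemu:///system net-define mk-newest-cni-485099.xml
	virsh --connect qemu:///system net-start mk-newest-cni-485099
	virsh --connect qemu:///system net-list --all    # confirm the network shows as active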
	I0729 12:07:49.079445   76627 main.go:141] libmachine: (newest-cni-485099) Setting up store path in /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099 ...
	I0729 12:07:49.079466   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:49.079228   76650 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 12:07:49.079489   76627 main.go:141] libmachine: (newest-cni-485099) Building disk image from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:07:49.079577   76627 main.go:141] libmachine: (newest-cni-485099) Downloading /home/jenkins/minikube-integration/19337-3845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:07:49.316815   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:49.316701   76650 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa...
	I0729 12:07:49.392141   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:49.392004   76650 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/newest-cni-485099.rawdisk...
	I0729 12:07:49.392179   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Writing magic tar header
	I0729 12:07:49.392221   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Writing SSH key tar header
	I0729 12:07:49.392260   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:49.392168   76650 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099 ...
	I0729 12:07:49.392300   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099
	I0729 12:07:49.392330   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099 (perms=drwx------)
	I0729 12:07:49.392343   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube/machines
	I0729 12:07:49.392364   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 12:07:49.392378   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19337-3845
	I0729 12:07:49.392395   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:07:49.392408   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:07:49.392418   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:07:49.392432   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Checking permissions on dir: /home
	I0729 12:07:49.392443   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Skipping /home - not owner
	I0729 12:07:49.392457   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845/.minikube (perms=drwxr-xr-x)
	I0729 12:07:49.392480   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins/minikube-integration/19337-3845 (perms=drwxrwxr-x)
	I0729 12:07:49.392491   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:07:49.392498   76627 main.go:141] libmachine: (newest-cni-485099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:07:49.392508   76627 main.go:141] libmachine: (newest-cni-485099) Creating domain...
	I0729 12:07:49.393644   76627 main.go:141] libmachine: (newest-cni-485099) define libvirt domain using xml: 
	I0729 12:07:49.393668   76627 main.go:141] libmachine: (newest-cni-485099) <domain type='kvm'>
	I0729 12:07:49.393693   76627 main.go:141] libmachine: (newest-cni-485099)   <name>newest-cni-485099</name>
	I0729 12:07:49.393701   76627 main.go:141] libmachine: (newest-cni-485099)   <memory unit='MiB'>2200</memory>
	I0729 12:07:49.393709   76627 main.go:141] libmachine: (newest-cni-485099)   <vcpu>2</vcpu>
	I0729 12:07:49.393715   76627 main.go:141] libmachine: (newest-cni-485099)   <features>
	I0729 12:07:49.393724   76627 main.go:141] libmachine: (newest-cni-485099)     <acpi/>
	I0729 12:07:49.393728   76627 main.go:141] libmachine: (newest-cni-485099)     <apic/>
	I0729 12:07:49.393733   76627 main.go:141] libmachine: (newest-cni-485099)     <pae/>
	I0729 12:07:49.393737   76627 main.go:141] libmachine: (newest-cni-485099)     
	I0729 12:07:49.393742   76627 main.go:141] libmachine: (newest-cni-485099)   </features>
	I0729 12:07:49.393750   76627 main.go:141] libmachine: (newest-cni-485099)   <cpu mode='host-passthrough'>
	I0729 12:07:49.393755   76627 main.go:141] libmachine: (newest-cni-485099)   
	I0729 12:07:49.393759   76627 main.go:141] libmachine: (newest-cni-485099)   </cpu>
	I0729 12:07:49.393763   76627 main.go:141] libmachine: (newest-cni-485099)   <os>
	I0729 12:07:49.393768   76627 main.go:141] libmachine: (newest-cni-485099)     <type>hvm</type>
	I0729 12:07:49.393773   76627 main.go:141] libmachine: (newest-cni-485099)     <boot dev='cdrom'/>
	I0729 12:07:49.393777   76627 main.go:141] libmachine: (newest-cni-485099)     <boot dev='hd'/>
	I0729 12:07:49.393782   76627 main.go:141] libmachine: (newest-cni-485099)     <bootmenu enable='no'/>
	I0729 12:07:49.393786   76627 main.go:141] libmachine: (newest-cni-485099)   </os>
	I0729 12:07:49.393790   76627 main.go:141] libmachine: (newest-cni-485099)   <devices>
	I0729 12:07:49.393795   76627 main.go:141] libmachine: (newest-cni-485099)     <disk type='file' device='cdrom'>
	I0729 12:07:49.393803   76627 main.go:141] libmachine: (newest-cni-485099)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/boot2docker.iso'/>
	I0729 12:07:49.393809   76627 main.go:141] libmachine: (newest-cni-485099)       <target dev='hdc' bus='scsi'/>
	I0729 12:07:49.393814   76627 main.go:141] libmachine: (newest-cni-485099)       <readonly/>
	I0729 12:07:49.393821   76627 main.go:141] libmachine: (newest-cni-485099)     </disk>
	I0729 12:07:49.393830   76627 main.go:141] libmachine: (newest-cni-485099)     <disk type='file' device='disk'>
	I0729 12:07:49.393838   76627 main.go:141] libmachine: (newest-cni-485099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:07:49.393850   76627 main.go:141] libmachine: (newest-cni-485099)       <source file='/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/newest-cni-485099.rawdisk'/>
	I0729 12:07:49.393857   76627 main.go:141] libmachine: (newest-cni-485099)       <target dev='hda' bus='virtio'/>
	I0729 12:07:49.393864   76627 main.go:141] libmachine: (newest-cni-485099)     </disk>
	I0729 12:07:49.393871   76627 main.go:141] libmachine: (newest-cni-485099)     <interface type='network'>
	I0729 12:07:49.393879   76627 main.go:141] libmachine: (newest-cni-485099)       <source network='mk-newest-cni-485099'/>
	I0729 12:07:49.393893   76627 main.go:141] libmachine: (newest-cni-485099)       <model type='virtio'/>
	I0729 12:07:49.393909   76627 main.go:141] libmachine: (newest-cni-485099)     </interface>
	I0729 12:07:49.393921   76627 main.go:141] libmachine: (newest-cni-485099)     <interface type='network'>
	I0729 12:07:49.393932   76627 main.go:141] libmachine: (newest-cni-485099)       <source network='default'/>
	I0729 12:07:49.393937   76627 main.go:141] libmachine: (newest-cni-485099)       <model type='virtio'/>
	I0729 12:07:49.393942   76627 main.go:141] libmachine: (newest-cni-485099)     </interface>
	I0729 12:07:49.393947   76627 main.go:141] libmachine: (newest-cni-485099)     <serial type='pty'>
	I0729 12:07:49.393959   76627 main.go:141] libmachine: (newest-cni-485099)       <target port='0'/>
	I0729 12:07:49.393967   76627 main.go:141] libmachine: (newest-cni-485099)     </serial>
	I0729 12:07:49.393972   76627 main.go:141] libmachine: (newest-cni-485099)     <console type='pty'>
	I0729 12:07:49.393980   76627 main.go:141] libmachine: (newest-cni-485099)       <target type='serial' port='0'/>
	I0729 12:07:49.393988   76627 main.go:141] libmachine: (newest-cni-485099)     </console>
	I0729 12:07:49.393992   76627 main.go:141] libmachine: (newest-cni-485099)     <rng model='virtio'>
	I0729 12:07:49.394001   76627 main.go:141] libmachine: (newest-cni-485099)       <backend model='random'>/dev/random</backend>
	I0729 12:07:49.394006   76627 main.go:141] libmachine: (newest-cni-485099)     </rng>
	I0729 12:07:49.394011   76627 main.go:141] libmachine: (newest-cni-485099)     
	I0729 12:07:49.394017   76627 main.go:141] libmachine: (newest-cni-485099)     
	I0729 12:07:49.394022   76627 main.go:141] libmachine: (newest-cni-485099)   </devices>
	I0729 12:07:49.394029   76627 main.go:141] libmachine: (newest-cni-485099) </domain>
	I0729 12:07:49.394036   76627 main.go:141] libmachine: (newest-cni-485099) 
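The domain XML above is then defined and booted by the driver. A hand-run approximation with virsh would look like this, assuming the XML were written to newest-cni-485099.xml (hypothetical filename); domifaddr is one way to watch for the DHCP lease that the retry loop below is waiting on:

	virsh --connect qemu:///system define newest-cni-485099.xml
	virsh --connect qemu:///system start newest-cni-485099
	virsh --connect qemu:///system domifaddr newest-cni-485099    # poll until an IP appears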
	I0729 12:07:49.398497   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:2c:49:b3 in network default
	I0729 12:07:49.399134   76627 main.go:141] libmachine: (newest-cni-485099) Ensuring networks are active...
	I0729 12:07:49.399161   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:49.399754   76627 main.go:141] libmachine: (newest-cni-485099) Ensuring network default is active
	I0729 12:07:49.400003   76627 main.go:141] libmachine: (newest-cni-485099) Ensuring network mk-newest-cni-485099 is active
	I0729 12:07:49.400573   76627 main.go:141] libmachine: (newest-cni-485099) Getting domain xml...
	I0729 12:07:49.401348   76627 main.go:141] libmachine: (newest-cni-485099) Creating domain...
	I0729 12:07:50.654875   76627 main.go:141] libmachine: (newest-cni-485099) Waiting to get IP...
	I0729 12:07:50.655666   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:50.656104   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:50.656148   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:50.656088   76650 retry.go:31] will retry after 231.820029ms: waiting for machine to come up
	I0729 12:07:50.889488   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:50.889958   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:50.889980   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:50.889923   76650 retry.go:31] will retry after 359.956908ms: waiting for machine to come up
	I0729 12:07:51.251394   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:51.252024   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:51.252050   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:51.251981   76650 retry.go:31] will retry after 417.626161ms: waiting for machine to come up
	I0729 12:07:51.671554   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:51.671965   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:51.671992   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:51.671909   76650 retry.go:31] will retry after 497.621176ms: waiting for machine to come up
	I0729 12:07:52.171608   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:52.172079   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:52.172110   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:52.172035   76650 retry.go:31] will retry after 550.735851ms: waiting for machine to come up
	I0729 12:07:52.725165   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:52.725595   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:52.725620   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:52.725562   76650 retry.go:31] will retry after 795.759591ms: waiting for machine to come up
	I0729 12:07:53.522435   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:53.522832   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:53.522859   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:53.522787   76650 retry.go:31] will retry after 907.511323ms: waiting for machine to come up
	I0729 12:07:54.432337   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:54.432964   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:54.432989   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:54.432893   76650 retry.go:31] will retry after 1.266449261s: waiting for machine to come up
	I0729 12:07:55.701380   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:55.701818   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:55.701845   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:55.701785   76650 retry.go:31] will retry after 1.860748834s: waiting for machine to come up
	I0729 12:07:57.564201   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:57.564673   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:57.564694   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:57.564621   76650 retry.go:31] will retry after 2.029091392s: waiting for machine to come up
	I0729 12:07:59.595577   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:07:59.596121   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:07:59.596181   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:07:59.596063   76650 retry.go:31] will retry after 2.499693771s: waiting for machine to come up
	I0729 12:08:02.098641   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:02.099181   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:02.099207   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:02.099143   76650 retry.go:31] will retry after 3.204130701s: waiting for machine to come up
	I0729 12:08:05.304625   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:05.305024   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:05.305045   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:05.305006   76650 retry.go:31] will retry after 3.291510846s: waiting for machine to come up
	I0729 12:08:08.599809   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:08.600192   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find current IP address of domain newest-cni-485099 in network mk-newest-cni-485099
	I0729 12:08:08.600221   76627 main.go:141] libmachine: (newest-cni-485099) DBG | I0729 12:08:08.600153   76650 retry.go:31] will retry after 3.69942356s: waiting for machine to come up
	I0729 12:08:12.303769   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.304219   76627 main.go:141] libmachine: (newest-cni-485099) Found IP for machine: 192.168.72.213
	I0729 12:08:12.304244   76627 main.go:141] libmachine: (newest-cni-485099) Reserving static IP address...
	I0729 12:08:12.304258   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has current primary IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.304750   76627 main.go:141] libmachine: (newest-cni-485099) DBG | unable to find host DHCP lease matching {name: "newest-cni-485099", mac: "52:54:00:82:f5:00", ip: "192.168.72.213"} in network mk-newest-cni-485099
	I0729 12:08:12.384559   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Getting to WaitForSSH function...
	I0729 12:08:12.384588   76627 main.go:141] libmachine: (newest-cni-485099) Reserved static IP address: 192.168.72.213
	I0729 12:08:12.384599   76627 main.go:141] libmachine: (newest-cni-485099) Waiting for SSH to be available...
	I0729 12:08:12.387383   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.387781   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:minikube Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.387812   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.387993   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Using SSH client type: external
	I0729 12:08:12.388013   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa (-rw-------)
	I0729 12:08:12.388041   76627 main.go:141] libmachine: (newest-cni-485099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:08:12.388050   76627 main.go:141] libmachine: (newest-cni-485099) DBG | About to run SSH command:
	I0729 12:08:12.388064   76627 main.go:141] libmachine: (newest-cni-485099) DBG | exit 0
	I0729 12:08:12.511328   76627 main.go:141] libmachine: (newest-cni-485099) DBG | SSH cmd err, output: <nil>: 
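The successful probe above is just the recorded external ssh invocation; spelled out as one command line (arguments taken from the log entry, reordered only so the options precede the destination), it is roughly:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
		-o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
		-o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		-o IdentitiesOnly=yes -p 22 \
		-i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa \
		docker@192.168.72.213 'exit 0'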
	I0729 12:08:12.511598   76627 main.go:141] libmachine: (newest-cni-485099) KVM machine creation complete!
	I0729 12:08:12.511940   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetConfigRaw
	I0729 12:08:12.512557   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:12.512746   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:12.512929   76627 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 12:08:12.512946   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetState
	I0729 12:08:12.514307   76627 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 12:08:12.514326   76627 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 12:08:12.514333   76627 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 12:08:12.514340   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:12.517199   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.517599   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.517626   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.517954   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:12.518161   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.518332   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.518471   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:12.518649   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:12.518891   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:12.518905   76627 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 12:08:12.618258   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:08:12.618299   76627 main.go:141] libmachine: Detecting the provisioner...
	I0729 12:08:12.618318   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:12.621511   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.621842   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.621880   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.622118   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:12.622333   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.622514   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.622644   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:12.622854   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:12.623077   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:12.623093   76627 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 12:08:12.724058   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 12:08:12.724150   76627 main.go:141] libmachine: found compatible host: buildroot
	I0729 12:08:12.724162   76627 main.go:141] libmachine: Provisioning with buildroot...
	I0729 12:08:12.724175   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:08:12.724408   76627 buildroot.go:166] provisioning hostname "newest-cni-485099"
	I0729 12:08:12.724429   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:08:12.724624   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:12.727564   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.728067   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.728098   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.728264   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:12.728436   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.728605   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.728766   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:12.728937   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:12.729097   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:12.729109   76627 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-485099 && echo "newest-cni-485099" | sudo tee /etc/hostname
	I0729 12:08:12.849920   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-485099
	
	I0729 12:08:12.849959   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:12.852946   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.853292   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.853321   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.853493   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:12.853684   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.853873   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:12.854048   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:12.854228   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:12.854444   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:12.854495   76627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-485099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-485099/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-485099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:08:12.966620   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:08:12.966648   76627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 12:08:12.966678   76627 buildroot.go:174] setting up certificates
	I0729 12:08:12.966689   76627 provision.go:84] configureAuth start
	I0729 12:08:12.966715   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetMachineName
	I0729 12:08:12.967009   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:08:12.969913   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.970236   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.970266   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.970462   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:12.972894   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.973222   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:12.973250   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:12.973486   76627 provision.go:143] copyHostCerts
	I0729 12:08:12.973538   76627 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 12:08:12.973547   76627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 12:08:12.973612   76627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 12:08:12.973722   76627 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 12:08:12.973733   76627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 12:08:12.973766   76627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 12:08:12.973836   76627 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 12:08:12.973844   76627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 12:08:12.973863   76627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 12:08:12.973930   76627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.newest-cni-485099 san=[127.0.0.1 192.168.72.213 localhost minikube newest-cni-485099]
	I0729 12:08:13.057049   76627 provision.go:177] copyRemoteCerts
	I0729 12:08:13.057103   76627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:08:13.057125   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.060209   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.060639   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.060671   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.060860   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.061064   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.061301   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.061470   76627 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:08:13.141589   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:08:13.170137   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 12:08:13.196963   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:08:13.222610   76627 provision.go:87] duration metric: took 255.90855ms to configureAuth
	I0729 12:08:13.222643   76627 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:08:13.222909   76627 config.go:182] Loaded profile config "newest-cni-485099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:08:13.223000   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.225873   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.226322   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.226347   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.226626   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.226858   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.227042   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.227197   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.227363   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:13.227538   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:13.227553   76627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:08:13.503807   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:08:13.503851   76627 main.go:141] libmachine: Checking connection to Docker...
	I0729 12:08:13.503862   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetURL
	I0729 12:08:13.505209   76627 main.go:141] libmachine: (newest-cni-485099) DBG | Using libvirt version 6000000
	I0729 12:08:13.507676   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.508066   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.508094   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.508223   76627 main.go:141] libmachine: Docker is up and running!
	I0729 12:08:13.508237   76627 main.go:141] libmachine: Reticulating splines...
	I0729 12:08:13.508246   76627 client.go:171] duration metric: took 24.514309608s to LocalClient.Create
	I0729 12:08:13.508274   76627 start.go:167] duration metric: took 24.514369412s to libmachine.API.Create "newest-cni-485099"
	I0729 12:08:13.508285   76627 start.go:293] postStartSetup for "newest-cni-485099" (driver="kvm2")
	I0729 12:08:13.508300   76627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:08:13.508319   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:13.508568   76627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:08:13.508589   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.510962   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.511288   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.511314   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.511433   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.511640   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.511809   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.511987   76627 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:08:13.594871   76627 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:08:13.599488   76627 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:08:13.599517   76627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 12:08:13.599581   76627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 12:08:13.599655   76627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 12:08:13.599776   76627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:08:13.609586   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 12:08:13.635332   76627 start.go:296] duration metric: took 127.033476ms for postStartSetup
	I0729 12:08:13.635381   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetConfigRaw
	I0729 12:08:13.635976   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:08:13.638321   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.638717   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.638762   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.638985   76627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/config.json ...
	I0729 12:08:13.639214   76627 start.go:128] duration metric: took 24.664333833s to createHost
	I0729 12:08:13.639238   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.641282   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.641624   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.641653   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.641817   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.642011   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.642155   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.642390   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.642573   76627 main.go:141] libmachine: Using SSH client type: native
	I0729 12:08:13.642792   76627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0729 12:08:13.642807   76627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:08:13.743625   76627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254893.711317171
	
	I0729 12:08:13.743651   76627 fix.go:216] guest clock: 1722254893.711317171
	I0729 12:08:13.743660   76627 fix.go:229] Guest: 2024-07-29 12:08:13.711317171 +0000 UTC Remote: 2024-07-29 12:08:13.639227149 +0000 UTC m=+24.777681518 (delta=72.090022ms)
	I0729 12:08:13.743683   76627 fix.go:200] guest clock delta is within tolerance: 72.090022ms
	I0729 12:08:13.743689   76627 start.go:83] releasing machines lock for "newest-cni-485099", held for 24.768893591s
	I0729 12:08:13.743706   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:13.743985   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:08:13.746788   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.747139   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.747192   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.747307   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:13.748020   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:13.748236   76627 main.go:141] libmachine: (newest-cni-485099) Calling .DriverName
	I0729 12:08:13.748359   76627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:08:13.748398   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.748488   76627 ssh_runner.go:195] Run: cat /version.json
	I0729 12:08:13.748511   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHHostname
	I0729 12:08:13.751630   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.751974   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.752077   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.752105   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.752319   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.752514   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.752536   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:13.752571   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:13.752700   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.752718   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHPort
	I0729 12:08:13.752881   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHKeyPath
	I0729 12:08:13.752923   76627 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:08:13.752996   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetSSHUsername
	I0729 12:08:13.753158   76627 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/newest-cni-485099/id_rsa Username:docker}
	I0729 12:08:13.855349   76627 ssh_runner.go:195] Run: systemctl --version
	I0729 12:08:13.862294   76627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:08:14.028060   76627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:08:14.034538   76627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:08:14.034604   76627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:08:14.052506   76627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:08:14.052542   76627 start.go:495] detecting cgroup driver to use...
	I0729 12:08:14.052611   76627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:08:14.071537   76627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:08:14.085850   76627 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:08:14.085902   76627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:08:14.100174   76627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:08:14.114969   76627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:08:14.241198   76627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:08:14.397184   76627 docker.go:233] disabling docker service ...
	I0729 12:08:14.397257   76627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:08:14.413347   76627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:08:14.427036   76627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:08:14.569794   76627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:08:14.689653   76627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:08:14.705496   76627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:08:14.725590   76627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 12:08:14.725660   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.736650   76627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:08:14.736717   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.748684   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.760519   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.771965   76627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:08:14.783677   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.795384   76627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.814732   76627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:08:14.826388   76627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:08:14.836313   76627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:08:14.836391   76627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:08:14.850627   76627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:08:14.861693   76627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:08:14.984812   76627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:08:15.133934   76627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:08:15.134005   76627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:08:15.139764   76627 start.go:563] Will wait 60s for crictl version
	I0729 12:08:15.139828   76627 ssh_runner.go:195] Run: which crictl
	I0729 12:08:15.143738   76627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:08:15.190120   76627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:08:15.190197   76627 ssh_runner.go:195] Run: crio --version
	I0729 12:08:15.220731   76627 ssh_runner.go:195] Run: crio --version
	I0729 12:08:15.256468   76627 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 12:08:15.258161   76627 main.go:141] libmachine: (newest-cni-485099) Calling .GetIP
	I0729 12:08:15.261193   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:15.261573   76627 main.go:141] libmachine: (newest-cni-485099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:f5:00", ip: ""} in network mk-newest-cni-485099: {Iface:virbr4 ExpiryTime:2024-07-29 13:08:03 +0000 UTC Type:0 Mac:52:54:00:82:f5:00 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-485099 Clientid:01:52:54:00:82:f5:00}
	I0729 12:08:15.261608   76627 main.go:141] libmachine: (newest-cni-485099) DBG | domain newest-cni-485099 has defined IP address 192.168.72.213 and MAC address 52:54:00:82:f5:00 in network mk-newest-cni-485099
	I0729 12:08:15.261952   76627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 12:08:15.266824   76627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:08:15.284068   76627 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 12:08:15.285823   76627 kubeadm.go:883] updating cluster {Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:08:15.285983   76627 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:08:15.286059   76627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:08:15.325012   76627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 12:08:15.325076   76627 ssh_runner.go:195] Run: which lz4
	I0729 12:08:15.329203   76627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 12:08:15.333559   76627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:08:15.333598   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 12:08:16.780931   76627 crio.go:462] duration metric: took 1.451751146s to copy over tarball
	I0729 12:08:16.781035   76627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:08:18.839334   76627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.058272121s)
	I0729 12:08:18.839364   76627 crio.go:469] duration metric: took 2.058404591s to extract the tarball
	I0729 12:08:18.839375   76627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 12:08:18.878884   76627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:08:18.925869   76627 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:08:18.925891   76627 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:08:18.925898   76627 kubeadm.go:934] updating node { 192.168.72.213 8443 v1.31.0-beta.0 crio true true} ...
	I0729 12:08:18.925998   76627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-485099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:08:18.926077   76627 ssh_runner.go:195] Run: crio config
	I0729 12:08:18.973776   76627 cni.go:84] Creating CNI manager for ""
	I0729 12:08:18.973810   76627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:08:18.973829   76627 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 12:08:18.973861   76627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-485099 NodeName:newest-cni-485099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:08:18.974077   76627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-485099"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:08:18.974181   76627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 12:08:18.985764   76627 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:08:18.985832   76627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:08:18.995440   76627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 12:08:19.013730   76627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 12:08:19.030681   76627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 12:08:19.049742   76627 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0729 12:08:19.054123   76627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:08:19.066541   76627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:08:19.195697   76627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:08:19.213531   76627 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099 for IP: 192.168.72.213
	I0729 12:08:19.213559   76627 certs.go:194] generating shared ca certs ...
	I0729 12:08:19.213578   76627 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:19.213786   76627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 12:08:19.213838   76627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 12:08:19.213853   76627 certs.go:256] generating profile certs ...
	I0729 12:08:19.213922   76627 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.key
	I0729 12:08:19.213940   76627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.crt with IP's: []
	I0729 12:08:19.345452   76627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.crt ...
	I0729 12:08:19.345488   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.crt: {Name:mk1ca1db4961e28fd1c23c0127b851cfa4724814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:19.345679   76627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.key ...
	I0729 12:08:19.345691   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/client.key: {Name:mk85e60df0a76d8bb3a5349ad1ee435f50f85683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:19.345786   76627 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key.022f6aa0
	I0729 12:08:19.345802   76627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt.022f6aa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.213]
	I0729 12:08:19.664309   76627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt.022f6aa0 ...
	I0729 12:08:19.664340   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt.022f6aa0: {Name:mkdfb78e5593b5030cee3a70ef4e38ed7c177898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:19.664513   76627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key.022f6aa0 ...
	I0729 12:08:19.664528   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key.022f6aa0: {Name:mkd745a3c600c5e14575b8ec28efd3fd86c9bf0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:19.664605   76627 certs.go:381] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt.022f6aa0 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt
	I0729 12:08:19.664710   76627 certs.go:385] copying /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key.022f6aa0 -> /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key
	I0729 12:08:19.664771   76627 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key
	I0729 12:08:19.664796   76627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.crt with IP's: []
	I0729 12:08:20.006248   76627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.crt ...
	I0729 12:08:20.006281   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.crt: {Name:mk268df880097eb92d2b7f086709a50cd0e46a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:20.006483   76627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key ...
	I0729 12:08:20.006503   76627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key: {Name:mk3dc6028f4fc29f048cc60c7ee232044db73a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:08:20.006723   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 12:08:20.006768   76627 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 12:08:20.006787   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 12:08:20.006813   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:08:20.006855   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:08:20.006880   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 12:08:20.006932   76627 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 12:08:20.007559   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:08:20.040196   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:08:20.078978   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:08:20.110568   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 12:08:20.137155   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 12:08:20.164153   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:08:20.194950   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:08:20.221890   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/newest-cni-485099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:08:20.247848   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:08:20.275301   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 12:08:20.301669   76627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 12:08:20.330011   76627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:08:20.348619   76627 ssh_runner.go:195] Run: openssl version
	I0729 12:08:20.354588   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:08:20.366085   76627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:08:20.371149   76627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:08:20.371211   76627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:08:20.378142   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:08:20.391193   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 12:08:20.403829   76627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 12:08:20.408610   76627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 12:08:20.408674   76627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 12:08:20.414589   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 12:08:20.426262   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 12:08:20.437548   76627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 12:08:20.442352   76627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 12:08:20.442419   76627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 12:08:20.448647   76627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:08:20.460265   76627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:08:20.464599   76627 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 12:08:20.464678   76627 kubeadm.go:392] StartCluster: {Name:newest-cni-485099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-485099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:20.464771   76627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:08:20.464909   76627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:08:20.504827   76627 cri.go:89] found id: ""
	I0729 12:08:20.504910   76627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:08:20.515214   76627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:08:20.524814   76627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:08:20.534933   76627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:08:20.534954   76627 kubeadm.go:157] found existing configuration files:
	
	I0729 12:08:20.535003   76627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:08:20.544208   76627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:08:20.544277   76627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:08:20.553754   76627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:08:20.562551   76627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:08:20.562603   76627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:08:20.572172   76627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:08:20.582246   76627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:08:20.582319   76627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:08:20.593377   76627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:08:20.603511   76627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:08:20.603568   76627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:08:20.613715   76627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 12:08:20.740354   76627 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 12:08:20.740499   76627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 12:08:20.874438   76627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 12:08:20.874571   76627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 12:08:20.874746   76627 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 12:08:20.884690   76627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 12:08:21.058937   76627 out.go:204]   - Generating certificates and keys ...
	I0729 12:08:21.059055   76627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 12:08:21.059127   76627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 12:08:21.151719   76627 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 12:08:21.378045   76627 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 12:08:21.492113   76627 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 12:08:21.682953   76627 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 12:08:21.901863   76627 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 12:08:21.902173   76627 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-485099] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0729 12:08:22.273066   76627 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 12:08:22.273453   76627 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-485099] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0729 12:08:22.423753   76627 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 12:08:22.539696   76627 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 12:08:22.585558   76627 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 12:08:22.585856   76627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 12:08:22.859325   76627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 12:08:23.099944   76627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 12:08:23.310043   76627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 12:08:23.558443   76627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 12:08:23.748427   76627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 12:08:23.749015   76627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 12:08:23.752416   76627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 12:08:23.754203   76627 out.go:204]   - Booting up control plane ...
	I0729 12:08:23.754305   76627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 12:08:23.754415   76627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 12:08:23.754843   76627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 12:08:23.776489   76627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 12:08:23.785416   76627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 12:08:23.785465   76627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	
	
	==> CRI-O <==
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.128847148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4930438f-afba-4997-981c-abe17e2c74f9 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.130605090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29e44f75-229d-4ddb-a0b6-6fd9e4e30c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.131018087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254906130992231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29e44f75-229d-4ddb-a0b6-6fd9e4e30c02 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.131669172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=336fa49a-962d-432c-8202-86e5d004cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.131743246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=336fa49a-962d-432c-8202-86e5d004cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.132037557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=336fa49a-962d-432c-8202-86e5d004cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.176696081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=849152bd-394d-48cf-b6fd-c4e711327b2e name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.176795309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=849152bd-394d-48cf-b6fd-c4e711327b2e name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.178025666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fda5b923-56fe-4443-832e-6e4622e6829c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.178595122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254906178563317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fda5b923-56fe-4443-832e-6e4622e6829c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.180927865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c84a1384-a7eb-481b-8356-4972008da103 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.181168552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c84a1384-a7eb-481b-8356-4972008da103 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.181657091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c84a1384-a7eb-481b-8356-4972008da103 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.218547433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=604e21d2-a7aa-4dae-8e03-8aee30ea676f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.218641938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=604e21d2-a7aa-4dae-8e03-8aee30ea676f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.221202964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37672b5c-a343-4224-8ef8-911579b32635 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.221622584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254906221599615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37672b5c-a343-4224-8ef8-911579b32635 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.221848636Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d7ad9154-bd21-416e-bd56-cd87742b6460 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.222135458Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0f5081865615e0226332d98f1588ed04b50ce412aaef76b6b32c7f8db20da893,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-vxjvd,Uid:8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254018807796605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-vxjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:53:38.498264553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4afce5e3-3bcf-476d-9846-c57e98532d24,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254018652933556,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:53:38.341218619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-bnqrr,Uid:d2258a90-8b49-4cd3-9e84-6e3567ede3f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254017172997298,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:53:36.561335363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&PodSandboxMetadata{Name:kube-proxy-blx4g,Uid:892d6ac2-66bd-4af0-9bca-701
8e1d51c1b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254016911391385,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:53:36.598050296Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-7n6s7,Uid:4e8e4916-ee1d-47ce-902b-7c6328514ca9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254016890835682,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8e4916-ee1d-47ce-902b-7c6328514ca9,k8s-app: kube-dns,pod-templat
e-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:53:36.575704931Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-297799,Uid:dda72ebc91ce509da46f390d46284400,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722254005764877078,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: dda72ebc91ce509da46f390d46284400,kubernetes.io/config.seen: 2024-07-29T11:53:25.309726653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e03ca293b0943ddff9ec42373e53b7d
fc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-297799,Uid:c956e5ee1dc919e60090f2d6a4e35a7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254005762731023,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c956e5ee1dc919e60090f2d6a4e35a7c,kubernetes.io/config.seen: 2024-07-29T11:53:25.309727850Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-297799,Uid:21543e2f80b25c052d45594e6ac1871b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254005760070156,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 21543e2f80b25c052d45594e6ac1871b,kubernetes.io/config.seen: 2024-07-29T11:53:25.309728667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-297799,Uid:068113bfdd7d96276ec1d1442aa31b21,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722254005736319739,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.120:237
9,kubernetes.io/config.hash: 068113bfdd7d96276ec1d1442aa31b21,kubernetes.io/config.seen: 2024-07-29T11:53:25.309722656Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-297799,Uid:dda72ebc91ce509da46f390d46284400,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722253722139027519,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: dda72ebc91ce509da46f390d46284400,kubernetes.io/config.seen: 2024-07-29T11:48:41.632274755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=d7ad9154-bd21-416e-bd56-cd87742b6460 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.222461826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6802e5a8-32a9-4107-98e5-c752dd1f555c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.222504995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6802e5a8-32a9-4107-98e5-c752dd1f555c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.222713437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6802e5a8-32a9-4107-98e5-c752dd1f555c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.223509945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38eee976-6cf6-4497-a41d-353855afeb64 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.223585105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38eee976-6cf6-4497-a41d-353855afeb64 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:08:26 no-preload-297799 crio[724]: time="2024-07-29 12:08:26.223765449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee,PodSandboxId:c1451dcfd7f415486db3ba9939b82683a2580d656f1beb23b5c145e828f5198e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254018747255452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afce5e3-3bcf-476d-9846-c57e98532d24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060,PodSandboxId:4f2629ba00a9ff413e7f662e1a69f65937b4e7e1dfb94567b7a0fa9c8ce509eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254018079880829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bnqrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2258a90-8b49-4cd3-9e84-6e3567ede3f3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced,PodSandboxId:5111f1fc15e65d39088fa90d640915618c1a226c55a4358f4a104db5ec2be201,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254017600495493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7n6s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
8e4916-ee1d-47ce-902b-7c6328514ca9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5,PodSandboxId:535ebdf1385bd2aa287073fdc22b5feb291333ff875058695251177a6a81c13a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722254017113574428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blx4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892d6ac2-66bd-4af0-9bca-7018e1d51c1b,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea,PodSandboxId:631fe808ce547fcbba71bff39a599eab957a72293394190efa0571fd5fd23018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722254006000392888,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179,PodSandboxId:8ab8865d8537fd1f1bd5e966229bbe99b029b8c4b673e357a49c19adb012c1aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722254006017690656,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21543e2f80b25c052d45594e6ac1871b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763,PodSandboxId:e03ca293b0943ddff9ec42373e53b7dfc32976b0f10ce231fe7c9bb077bbcc15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722254005962907639,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c956e5ee1dc919e60090f2d6a4e35a7c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81,PodSandboxId:8d2b1da13933da60421943cb10e1ae777d5594ceccf0f9d5bb19e8813c565a44,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722254005907468938,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068113bfdd7d96276ec1d1442aa31b21,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30,PodSandboxId:2993c2108d0ae0d24f39f8cd80087b3420f6c5336b1994c15da59e6e26bbbd93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722253722315783186,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-297799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda72ebc91ce509da46f390d46284400,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38eee976-6cf6-4497-a41d-353855afeb64 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee36092b457       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c1451dcfd7f41       storage-provisioner
	9d9b00ee071e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   4f2629ba00a9f       coredns-5cfdc65f69-bnqrr
	1f49db7287541       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   5111f1fc15e65       coredns-5cfdc65f69-7n6s7
	c47520a7ce939       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   535ebdf1385bd       kube-proxy-blx4g
	b9849f4439601       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 minutes ago      Running             kube-scheduler            2                   8ab8865d8537f       kube-scheduler-no-preload-297799
	1520e4956aff0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 minutes ago      Running             kube-apiserver            2                   631fe808ce547       kube-apiserver-no-preload-297799
	ace2035e6f2a6       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 minutes ago      Running             kube-controller-manager   2                   e03ca293b0943       kube-controller-manager-no-preload-297799
	7b405cd582679       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 minutes ago      Running             etcd                      2                   8d2b1da13933d       etcd-no-preload-297799
	2e605ca417408       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   2993c2108d0ae       kube-apiserver-no-preload-297799
	
	
	==> coredns [1f49db7287541769a9c5841ac45bbca52be912aa3fd7d5bae1cc9d5d22542ced] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9d9b00ee071e357b4ca0b9d6ec91567f297a0fe207a61e6538b6eda02418b060] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-297799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-297799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=no-preload-297799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:53:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-297799
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:08:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:03:55 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:03:55 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:03:55 +0000   Mon, 29 Jul 2024 11:53:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:03:55 +0000   Mon, 29 Jul 2024 11:53:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    no-preload-297799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5c091591c34b2b97ae36f53988a04d
	  System UUID:                7d5c0915-91c3-4b2b-97ae-36f53988a04d
	  Boot ID:                    6c6ddb4a-0129-452b-989d-c392393f37ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-7n6s7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-bnqrr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-297799                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-297799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-297799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-blx4g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-297799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-vxjvd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-297799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-297799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-297799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-297799 event: Registered Node no-preload-297799 in Controller
	
	
	==> dmesg <==
	[  +0.041196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.165941] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.681317] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594233] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.440389] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.067447] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062707] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.169751] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.153169] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.299997] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.292596] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.066209] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.902442] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +3.884564] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.253377] kauditd_printk_skb: 57 callbacks suppressed
	[Jul29 11:49] kauditd_printk_skb: 28 callbacks suppressed
	[Jul29 11:53] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.836367] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +4.460357] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.104556] systemd-fstab-generator[3276]: Ignoring "noauto" option for root device
	[  +5.412695] systemd-fstab-generator[3392]: Ignoring "noauto" option for root device
	[  +0.051433] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.210878] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7b405cd58267974debe4f0092d98b65ec7ee23d139828e582e2d05c34185dc81] <==
	{"level":"info","ts":"2024-07-29T11:53:26.677614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:53:26.677807Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.686728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:53:26.686868Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:53:26.686961Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:53:26.688147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T11:53:26.696624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-07-29T11:53:26.69704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.706372Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.706507Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:53:26.710714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:03:27.122251Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-07-29T12:03:27.132253Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"9.181662ms","hash":2927602241,"current-db-size-bytes":2097152,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2097152,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-29T12:03:27.132358Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2927602241,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T12:08:21.223162Z","caller":"traceutil/trace.go:171","msg":"trace[1406881296] linearizableReadLoop","detail":"{readStateIndex:1356; appliedIndex:1355; }","duration":"404.336809ms","start":"2024-07-29T12:08:20.818703Z","end":"2024-07-29T12:08:21.22304Z","steps":["trace[1406881296] 'read index received'  (duration: 308.344365ms)","trace[1406881296] 'applied index is now lower than readState.Index'  (duration: 95.990614ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:08:21.2234Z","caller":"traceutil/trace.go:171","msg":"trace[721255073] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"439.15523ms","start":"2024-07-29T12:08:20.784229Z","end":"2024-07-29T12:08:21.223385Z","steps":["trace[721255073] 'process raft request'  (duration: 342.820635ms)","trace[721255073] 'compare'  (duration: 95.814108ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:08:21.22406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.97463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:08:21.224239Z","caller":"traceutil/trace.go:171","msg":"trace[1433763611] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1166; }","duration":"251.169177ms","start":"2024-07-29T12:08:20.973058Z","end":"2024-07-29T12:08:21.224227Z","steps":["trace[1433763611] 'agreement among raft nodes before linearized reading'  (duration: 250.905786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:08:21.224488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.778394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:08:21.224629Z","caller":"traceutil/trace.go:171","msg":"trace[1970247776] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"405.921045ms","start":"2024-07-29T12:08:20.818699Z","end":"2024-07-29T12:08:21.22462Z","steps":["trace[1970247776] 'agreement among raft nodes before linearized reading'  (duration: 405.761835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:08:21.224711Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:08:20.818666Z","time spent":"406.035731ms","remote":"127.0.0.1:58866","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-07-29T12:08:21.224896Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.005165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T12:08:21.225325Z","caller":"traceutil/trace.go:171","msg":"trace[734385337] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:1166; }","duration":"180.430905ms","start":"2024-07-29T12:08:21.044882Z","end":"2024-07-29T12:08:21.225313Z","steps":["trace[734385337] 'agreement among raft nodes before linearized reading'  (duration: 179.991857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:08:21.224589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T12:08:20.784213Z","time spent":"439.219471ms","remote":"127.0.0.1:58708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.120\" mod_revision:1158 > success:<request_put:<key:\"/registry/masterleases/192.168.39.120\" value_size:67 lease:6760062462731990194 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.120\" > >"}
	{"level":"info","ts":"2024-07-29T12:08:21.98278Z","caller":"traceutil/trace.go:171","msg":"trace[1130420968] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"113.408427ms","start":"2024-07-29T12:08:21.869279Z","end":"2024-07-29T12:08:21.982687Z","steps":["trace[1130420968] 'process raft request'  (duration: 113.094098ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:08:26 up 20 min,  0 users,  load average: 0.26, 0.21, 0.14
	Linux no-preload-297799 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1520e4956aff08169cf1bf92e39d8bb7cf1776327f167b35c5aa180b1955ebea] <==
	W0729 12:03:29.822917       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:03:29.823356       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 12:03:29.824441       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 12:03:29.824478       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:04:29.825605       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:04:29.825691       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 12:04:29.825723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:04:29.825734       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 12:04:29.826910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 12:04:29.826955       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 12:06:29.827149       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:06:29.827333       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 12:06:29.827161       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 12:06:29.827385       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 12:06:29.828610       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 12:06:29.828660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [2e605ca417408e170cea9d80ff78a3aa5e15c87e2e8fb47235fed33945f24a30] <==
	W0729 11:53:21.985642       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:21.991397       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.017518       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.039702       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.074317       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.095691       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.108619       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.171295       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.178856       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.185594       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.186958       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.197508       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.262350       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.265851       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.280617       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.300158       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.337259       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.350010       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.423606       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.453450       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.477415       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.552579       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.563574       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.718806       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 11:53:22.777630       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ace2035e6f2a6d7753c6b1c97aa6f8e5cdd7ae57bd452cb7a025bac96d958763] <==
	E0729 12:03:06.800719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:03:06.856568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:03:36.807802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:03:36.865243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:03:55.368961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-297799"
	E0729 12:04:06.815937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:04:06.873250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:04:36.822743       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:04:36.882906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 12:04:47.817681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="382.801µs"
	I0729 12:04:59.809765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="93.782µs"
	E0729 12:05:06.830258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:05:06.895061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:05:36.837438       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:05:36.904842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:06.844268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:06:06.913691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:06:36.850965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:06:36.922271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:06.857955       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:07:06.935925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:07:36.865998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:07:36.945645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 12:08:06.873416       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 12:08:06.953006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c47520a7ce939c00172d33ae5b6db2c81fbb7301ec269f327baaf9097a15d8b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 11:53:37.658814       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 11:53:37.698616       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	E0729 11:53:37.698696       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 11:53:37.792392       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 11:53:37.792422       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:53:37.792453       1 server_linux.go:170] "Using iptables Proxier"
	I0729 11:53:37.862154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 11:53:37.862406       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 11:53:37.862417       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:53:37.864052       1 config.go:197] "Starting service config controller"
	I0729 11:53:37.864136       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:53:37.864170       1 config.go:104] "Starting endpoint slice config controller"
	I0729 11:53:37.864177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:53:37.864813       1 config.go:326] "Starting node config controller"
	I0729 11:53:37.864820       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:53:37.966212       1 shared_informer.go:320] Caches are synced for node config
	I0729 11:53:37.966260       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:53:37.966280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b9849f4439601378b9024a3ea6af8212ca083ca359a0ba4e40a5f2b18a3d8179] <==
	W0729 11:53:29.720438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:53:29.720555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.775342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:53:29.775792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.776065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:29.776190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.889483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:53:29.889585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.932286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:53:29.933655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.953299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:53:29.953412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:29.987184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:29.987716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.052218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:53:30.052432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.086323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:53:30.086370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.116768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:53:30.116854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.142940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:53:30.143030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 11:53:30.232502       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:53:30.232562       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0729 11:53:32.636551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:05:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:05:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:05:38 no-preload-297799 kubelet[3283]: E0729 12:05:38.792975    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:05:50 no-preload-297799 kubelet[3283]: E0729 12:05:50.792452    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:06:04 no-preload-297799 kubelet[3283]: E0729 12:06:04.792308    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:06:19 no-preload-297799 kubelet[3283]: E0729 12:06:19.793643    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:06:31 no-preload-297799 kubelet[3283]: E0729 12:06:31.858882    3283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:06:31 no-preload-297799 kubelet[3283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:06:31 no-preload-297799 kubelet[3283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:06:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:06:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:06:34 no-preload-297799 kubelet[3283]: E0729 12:06:34.792028    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:06:45 no-preload-297799 kubelet[3283]: E0729 12:06:45.793366    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:07:00 no-preload-297799 kubelet[3283]: E0729 12:07:00.791758    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:07:13 no-preload-297799 kubelet[3283]: E0729 12:07:13.794932    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:07:25 no-preload-297799 kubelet[3283]: E0729 12:07:25.792263    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:07:31 no-preload-297799 kubelet[3283]: E0729 12:07:31.856751    3283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:07:31 no-preload-297799 kubelet[3283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:07:31 no-preload-297799 kubelet[3283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:07:31 no-preload-297799 kubelet[3283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:07:31 no-preload-297799 kubelet[3283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:07:40 no-preload-297799 kubelet[3283]: E0729 12:07:40.792422    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:07:53 no-preload-297799 kubelet[3283]: E0729 12:07:53.792711    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:08:08 no-preload-297799 kubelet[3283]: E0729 12:08:08.792403    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	Jul 29 12:08:19 no-preload-297799 kubelet[3283]: E0729 12:08:19.794725    3283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vxjvd" podUID="8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd"
	
	
	==> storage-provisioner [10ee36092b45794c1091ae3dcb9c5c38bb131a59b45cde74b042de394332fbee] <==
	I0729 11:53:38.861735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:53:38.874931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:53:38.875052       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:53:38.902843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:53:38.903436       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a!
	I0729 11:53:38.914201       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f8fbb97-e00d-4237-a520-406fd1ced5fc", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a became leader
	I0729 11:53:39.004815       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-297799_95e9fa6c-2c45-456e-913f-3f4c61b05e4a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-297799 -n no-preload-297799
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-297799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-vxjvd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd: exit status 1 (79.745327ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-vxjvd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-297799 describe pod metrics-server-78fcd8795b-vxjvd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (336.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (149.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
E0729 12:05:36.673263   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.61:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.61:8443: connect: connection refused
[the helpers_test.go:329 connection-refused warning above repeats another 29 times]
E0729 12:06:06.562955   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
[the same connection-refused warning repeats another 14 times]
E0729 12:06:20.496837   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
[the same connection-refused warning repeats another 46 times]
E0729 12:07:06.188269   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
[the same connection-refused warning repeats another 7 times]
E0729 12:07:13.094568   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
[the same connection-refused warning repeats another 31 times]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (229.529865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-188043" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-188043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-188043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.299µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-188043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
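For reference, the two checks that fail above are a label-selector pod list in the kubernetes-dashboard namespace (the request helpers_test.go:329 keeps retrying) and an inspection of the dashboard-metrics-scraper deployment image, which is expected to contain registry.k8s.io/echoserver:1.4 because the addon was enabled with that image override. The following is a minimal client-go sketch of those checks, not the test helper's actual code; it assumes the old-k8s-version-188043 context is the current context in ~/.kube/config and that the apiserver is reachable, which it was not here.

	// dashboard_check.go: minimal sketch of the two dashboard checks,
	// assuming the current kubeconfig context points at the profile's cluster.
	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// 1. List dashboard pods by label selector. While the apiserver is down,
		//    this returns the "connection refused" error logged repeatedly above.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("pod list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Println("pod:", p.Name, p.Status.Phase)
		}

		// 2. Check whether the scraper deployment references the expected image.
		dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").Get(ctx,
			"dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			fmt.Println("deployment get failed:", err)
			return
		}
		for _, c := range dep.Spec.Template.Spec.Containers {
			fmt.Println("image:", c.Image,
				"has echoserver:", strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4"))
		}
	}

An equivalent ad-hoc check with kubectl would be "kubectl --context old-k8s-version-188043 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard", which fails with the same connection-refused error while the apiserver is stopped.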
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (225.211122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188043 logs -n 25: (1.61679228s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184479 sudo cat                              | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo                                  | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo find                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184479 sudo crio                             | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184479                                       | bridge-184479                | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-574387 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | disable-driver-mounts-574387                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:41 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-297799             | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC | 29 Jul 24 11:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-297799                                   | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-731235            | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-754486  | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC | 29 Jul 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-297799                  | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-297799 --memory=2200                     | no-preload-297799            | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC | 29 Jul 24 11:53 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-188043        | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-731235                 | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-731235                                  | embed-certs-731235           | jenkins | v1.33.1 | 29 Jul 24 11:43 UTC | 29 Jul 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754486       | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754486 | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:53 UTC |
	|         | default-k8s-diff-port-754486                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-188043             | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC | 29 Jul 24 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-188043                              | old-k8s-version-188043       | jenkins | v1.33.1 | 29 Jul 24 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:44:35.218497   70480 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:44:35.218591   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218595   70480 out.go:304] Setting ErrFile to fd 2...
	I0729 11:44:35.218599   70480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:44:35.218821   70480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:44:35.219357   70480 out.go:298] Setting JSON to false
	I0729 11:44:35.220249   70480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5221,"bootTime":1722248254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:44:35.220304   70480 start.go:139] virtualization: kvm guest
	I0729 11:44:35.222361   70480 out.go:177] * [old-k8s-version-188043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:44:35.223849   70480 notify.go:220] Checking for updates...
	I0729 11:44:35.223857   70480 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:44:35.225068   70480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:44:35.226167   70480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:44:35.227448   70480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:44:35.228516   70480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:44:35.229771   70480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:44:35.231218   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:44:35.231601   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.231634   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.246457   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0729 11:44:35.246861   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.247335   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.247371   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.247712   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.247899   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.249642   70480 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:44:35.250805   70480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:44:35.251114   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:44:35.251152   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:44:35.265900   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0729 11:44:35.266309   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:44:35.266767   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:44:35.266794   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:44:35.267099   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:44:35.267293   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:44:35.302182   70480 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:44:35.303575   70480 start.go:297] selected driver: kvm2
	I0729 11:44:35.303593   70480 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.303689   70480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:44:35.304334   70480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.304396   70480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:44:35.319674   70480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:44:35.320023   70480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:44:35.320054   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:44:35.320061   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:44:35.320111   70480 start.go:340] cluster config:
	{Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:44:35.320210   70480 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:44:35.322089   70480 out.go:177] * Starting "old-k8s-version-188043" primary control-plane node in "old-k8s-version-188043" cluster
	I0729 11:44:38.643004   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:35.323173   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:44:35.323208   70480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 11:44:35.323221   70480 cache.go:56] Caching tarball of preloaded images
	I0729 11:44:35.323282   70480 preload.go:172] Found /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:44:35.323291   70480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 11:44:35.323390   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:44:35.323557   70480 start.go:360] acquireMachinesLock for old-k8s-version-188043: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:44:41.714983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:47.794983   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:50.867015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:44:56.946962   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:00.019017   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:06.099000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:09.171008   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:15.250989   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:18.322956   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:24.403015   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:27.474951   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:33.554944   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:36.627002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:42.706993   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:45.779000   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:51.858998   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:45:54.931013   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:01.011021   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:04.082938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:10.162988   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:13.235043   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:19.314994   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:22.386953   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:28.467078   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:31.539011   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:37.618990   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:40.690995   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:46.770999   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:49.842938   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:55.923002   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:46:58.994960   69419 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I0729 11:47:01.999190   69907 start.go:364] duration metric: took 3m42.920247555s to acquireMachinesLock for "embed-certs-731235"
	I0729 11:47:01.999237   69907 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:01.999244   69907 fix.go:54] fixHost starting: 
	I0729 11:47:01.999548   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:01.999574   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:02.014481   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0729 11:47:02.014934   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:02.015374   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:47:02.015392   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:02.015726   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:02.015911   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:02.016062   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:47:02.017570   69907 fix.go:112] recreateIfNeeded on embed-certs-731235: state=Stopped err=<nil>
	I0729 11:47:02.017606   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	W0729 11:47:02.017758   69907 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:02.020459   69907 out.go:177] * Restarting existing kvm2 VM for "embed-certs-731235" ...
	I0729 11:47:02.021770   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Start
	I0729 11:47:02.021904   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring networks are active...
	I0729 11:47:02.022551   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network default is active
	I0729 11:47:02.022943   69907 main.go:141] libmachine: (embed-certs-731235) Ensuring network mk-embed-certs-731235 is active
	I0729 11:47:02.023347   69907 main.go:141] libmachine: (embed-certs-731235) Getting domain xml...
	I0729 11:47:02.023972   69907 main.go:141] libmachine: (embed-certs-731235) Creating domain...
	I0729 11:47:03.233906   69907 main.go:141] libmachine: (embed-certs-731235) Waiting to get IP...
	I0729 11:47:03.234807   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.235200   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.235266   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.235191   70997 retry.go:31] will retry after 267.737911ms: waiting for machine to come up
	I0729 11:47:03.504861   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.505460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.505485   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.505418   70997 retry.go:31] will retry after 246.310337ms: waiting for machine to come up
	I0729 11:47:03.753068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:03.753558   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:03.753587   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:03.753520   70997 retry.go:31] will retry after 374.497339ms: waiting for machine to come up
	I0729 11:47:01.996514   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:01.996575   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.996873   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:47:01.996897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:47:01.997094   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:47:01.999070   69419 machine.go:97] duration metric: took 4m37.426222817s to provisionDockerMachine
	I0729 11:47:01.999113   69419 fix.go:56] duration metric: took 4m37.448019985s for fixHost
	I0729 11:47:01.999122   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 4m37.448042995s
	W0729 11:47:01.999140   69419 start.go:714] error starting host: provision: host is not running
	W0729 11:47:01.999247   69419 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 11:47:01.999257   69419 start.go:729] Will try again in 5 seconds ...
	I0729 11:47:04.130170   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.130603   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.130625   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.130557   70997 retry.go:31] will retry after 500.810762ms: waiting for machine to come up
	I0729 11:47:04.632773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:04.633142   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:04.633196   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:04.633094   70997 retry.go:31] will retry after 499.805121ms: waiting for machine to come up
	I0729 11:47:05.135101   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.135685   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.135714   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.135610   70997 retry.go:31] will retry after 713.805425ms: waiting for machine to come up
	I0729 11:47:05.850525   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:05.850950   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:05.850979   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:05.850918   70997 retry.go:31] will retry after 940.40593ms: waiting for machine to come up
	I0729 11:47:06.792982   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:06.793406   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:06.793433   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:06.793344   70997 retry.go:31] will retry after 1.216752167s: waiting for machine to come up
	I0729 11:47:08.012264   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:08.012748   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:08.012773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:08.012692   70997 retry.go:31] will retry after 1.729849311s: waiting for machine to come up
	I0729 11:47:07.000812   69419 start.go:360] acquireMachinesLock for no-preload-297799: {Name:mka8aac8533d9522bdb868559ea7715123a1a351 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:47:09.743735   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:09.744125   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:09.744144   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:09.744101   70997 retry.go:31] will retry after 2.251271574s: waiting for machine to come up
	I0729 11:47:11.998663   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:11.999213   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:11.999255   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:11.999163   70997 retry.go:31] will retry after 2.400718693s: waiting for machine to come up
	I0729 11:47:14.401005   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:14.401419   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:14.401442   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:14.401352   70997 retry.go:31] will retry after 3.073847413s: waiting for machine to come up
	I0729 11:47:17.477026   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:17.477424   69907 main.go:141] libmachine: (embed-certs-731235) DBG | unable to find current IP address of domain embed-certs-731235 in network mk-embed-certs-731235
	I0729 11:47:17.477460   69907 main.go:141] libmachine: (embed-certs-731235) DBG | I0729 11:47:17.477352   70997 retry.go:31] will retry after 3.28522497s: waiting for machine to come up
	I0729 11:47:22.076091   70231 start.go:364] duration metric: took 3m11.794715554s to acquireMachinesLock for "default-k8s-diff-port-754486"
	I0729 11:47:22.076162   70231 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:22.076177   70231 fix.go:54] fixHost starting: 
	I0729 11:47:22.076605   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:22.076644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:22.096370   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0729 11:47:22.096731   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:22.097267   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:47:22.097296   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:22.097603   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:22.097812   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:22.097983   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:47:22.099583   70231 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754486: state=Stopped err=<nil>
	I0729 11:47:22.099607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	W0729 11:47:22.099762   70231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:22.101982   70231 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754486" ...
	I0729 11:47:20.766989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767519   69907 main.go:141] libmachine: (embed-certs-731235) Found IP for machine: 192.168.61.202
	I0729 11:47:20.767544   69907 main.go:141] libmachine: (embed-certs-731235) Reserving static IP address...
	I0729 11:47:20.767560   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has current primary IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.767996   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.768025   69907 main.go:141] libmachine: (embed-certs-731235) DBG | skip adding static IP to network mk-embed-certs-731235 - found existing host DHCP lease matching {name: "embed-certs-731235", mac: "52:54:00:8a:bd:81", ip: "192.168.61.202"}
	I0729 11:47:20.768043   69907 main.go:141] libmachine: (embed-certs-731235) Reserved static IP address: 192.168.61.202
	I0729 11:47:20.768060   69907 main.go:141] libmachine: (embed-certs-731235) Waiting for SSH to be available...
	I0729 11:47:20.768068   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Getting to WaitForSSH function...
	I0729 11:47:20.770325   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770639   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.770667   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.770863   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH client type: external
	I0729 11:47:20.770894   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa (-rw-------)
	I0729 11:47:20.770927   69907 main.go:141] libmachine: (embed-certs-731235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:20.770943   69907 main.go:141] libmachine: (embed-certs-731235) DBG | About to run SSH command:
	I0729 11:47:20.770960   69907 main.go:141] libmachine: (embed-certs-731235) DBG | exit 0
	I0729 11:47:20.895074   69907 main.go:141] libmachine: (embed-certs-731235) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:20.895473   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetConfigRaw
	I0729 11:47:20.896121   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:20.898342   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.898673   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.898717   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.899017   69907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/config.json ...
	I0729 11:47:20.899239   69907 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:20.899262   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:20.899464   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:20.901688   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902056   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:20.902099   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:20.902249   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:20.902412   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902579   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:20.902715   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:20.902857   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:20.903102   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:20.903118   69907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:21.007368   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:21.007403   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007682   69907 buildroot.go:166] provisioning hostname "embed-certs-731235"
	I0729 11:47:21.007708   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.007928   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.010883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011268   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.011308   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.011465   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.011634   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011779   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.011950   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.012121   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.012314   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.012334   69907 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-731235 && echo "embed-certs-731235" | sudo tee /etc/hostname
	I0729 11:47:21.129877   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-731235
	
	I0729 11:47:21.129907   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.133055   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133390   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.133411   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.133614   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.133806   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.133977   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.134156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.134317   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.134480   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.134495   69907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-731235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-731235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-731235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:21.247997   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:21.248029   69907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:21.248056   69907 buildroot.go:174] setting up certificates
	I0729 11:47:21.248067   69907 provision.go:84] configureAuth start
	I0729 11:47:21.248075   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetMachineName
	I0729 11:47:21.248361   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:21.251377   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251711   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.251738   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.251908   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.254107   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254493   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.254521   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.254721   69907 provision.go:143] copyHostCerts
	I0729 11:47:21.254788   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:21.254801   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:21.254896   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:21.255008   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:21.255019   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:21.255058   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:21.255138   69907 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:21.255148   69907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:21.255183   69907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:21.255257   69907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-731235 san=[127.0.0.1 192.168.61.202 embed-certs-731235 localhost minikube]
	I0729 11:47:21.398780   69907 provision.go:177] copyRemoteCerts
	I0729 11:47:21.398833   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:21.398858   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.401840   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402259   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.402282   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.402483   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.402661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.402992   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.403139   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.484883   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 11:47:21.509042   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:47:21.532327   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:21.556013   69907 provision.go:87] duration metric: took 307.934726ms to configureAuth
	I0729 11:47:21.556040   69907 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:21.556258   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.556337   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.558962   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559347   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.559372   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.559518   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.559699   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.559861   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.560004   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.560157   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.560337   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.560356   69907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:21.834240   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:21.834270   69907 machine.go:97] duration metric: took 935.015622ms to provisionDockerMachine
	I0729 11:47:21.834284   69907 start.go:293] postStartSetup for "embed-certs-731235" (driver="kvm2")
	I0729 11:47:21.834299   69907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:21.834325   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:21.834638   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:21.834671   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.837313   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837712   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.837751   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.837857   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.838022   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.838229   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.838357   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:21.922275   69907 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:21.926932   69907 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:21.926955   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:21.927027   69907 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:21.927136   69907 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:21.927219   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:21.937122   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:21.964493   69907 start.go:296] duration metric: took 130.192874ms for postStartSetup
	I0729 11:47:21.964533   69907 fix.go:56] duration metric: took 19.965288806s for fixHost
	I0729 11:47:21.964554   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:21.967318   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967652   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:21.967682   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:21.967850   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:21.968066   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968222   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:21.968356   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:21.968509   69907 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:21.968717   69907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0729 11:47:21.968731   69907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:22.075873   69907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253642.050121254
	
	I0729 11:47:22.075893   69907 fix.go:216] guest clock: 1722253642.050121254
	I0729 11:47:22.075900   69907 fix.go:229] Guest: 2024-07-29 11:47:22.050121254 +0000 UTC Remote: 2024-07-29 11:47:21.964537244 +0000 UTC m=+243.027106048 (delta=85.58401ms)
	I0729 11:47:22.075927   69907 fix.go:200] guest clock delta is within tolerance: 85.58401ms
	I0729 11:47:22.075933   69907 start.go:83] releasing machines lock for "embed-certs-731235", held for 20.076714897s
	I0729 11:47:22.075958   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.076265   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:22.079236   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079566   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.079604   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.079771   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080311   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080491   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:47:22.080573   69907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:22.080644   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.080719   69907 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:22.080743   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:47:22.083401   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083438   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083743   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083773   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083883   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:22.083904   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:22.083917   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084061   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:47:22.084156   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084378   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084389   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:47:22.084565   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:47:22.084573   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.084691   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:47:22.188025   69907 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:22.194866   69907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:22.344382   69907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:22.350719   69907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:22.350809   69907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:22.371783   69907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:22.371814   69907 start.go:495] detecting cgroup driver to use...
	I0729 11:47:22.371874   69907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:22.387899   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:22.401722   69907 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:22.401790   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:22.415295   69907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:22.429209   69907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:22.541230   69907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:22.705734   69907 docker.go:233] disabling docker service ...
	I0729 11:47:22.705811   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:22.720716   69907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:22.736719   69907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:22.865574   69907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:22.994470   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:23.018115   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:23.037125   69907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:23.037210   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.048702   69907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:23.048768   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.061785   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.074734   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.087639   69907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:23.101010   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.113893   69907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.134264   69907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:23.147422   69907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:23.158168   69907 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:23.158220   69907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:23.175245   69907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:47:23.190456   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:23.314426   69907 ssh_runner.go:195] Run: sudo systemctl restart crio
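
The sequence from the crictl.yaml write through `systemctl restart crio` above is a scripted edit of /etc/crio/crio.conf.d/02-crio.conf: point cri-o at the pause image, switch it to the cgroupfs manager, then reload and restart the service. A condensed sketch of the same steps, run locally via `sh -c` rather than over SSH (an assumption for brevity; the conmon_cgroup and sysctl edits from the log are omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configureCRIO replays the core edits from the log: pause image, cgroup
    // manager, then a daemon-reload and crio restart, each as a shell command.
    func configureCRIO() error {
        steps := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, s := range steps {
            if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
                return fmt.Errorf("%q failed: %v: %s", s, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := configureCRIO(); err != nil {
            fmt.Println(err)
        }
    }
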
	I0729 11:47:23.459513   69907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:23.459584   69907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:23.464829   69907 start.go:563] Will wait 60s for crictl version
	I0729 11:47:23.464899   69907 ssh_runner.go:195] Run: which crictl
	I0729 11:47:23.468768   69907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:23.508694   69907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:23.508811   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.537048   69907 ssh_runner.go:195] Run: crio --version
	I0729 11:47:23.569189   69907 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:23.570566   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetIP
	I0729 11:47:23.573554   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.573918   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:47:23.573946   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:47:23.574198   69907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:23.578543   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:23.591660   69907 kubeadm.go:883] updating cluster {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:23.591803   69907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:23.591862   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:23.629355   69907 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:23.629423   69907 ssh_runner.go:195] Run: which lz4
	I0729 11:47:23.633713   69907 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:47:23.638463   69907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:23.638491   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
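
Whether the ~406 MB preload tarball gets copied at all depends on the crictl image listing a few lines up: only when the expected control-plane image (here registry.k8s.io/kube-apiserver:v1.30.3) is missing does minikube scp and unpack the cache, and the same listing runs again afterwards, which is why the log later flips to "all images are preloaded". A rough sketch of that check, with a simplified view of the crictl JSON:

    package sketch

    import (
        "encoding/json"
        "os/exec"
        "strings"
    )

    // crictlImages models just the field we need from `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // needsPreload returns true when no image tag matches the wanted reference,
    // i.e. the cached tarball still has to be copied and extracted.
    func needsPreload(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return false, nil // image already present, skip the tarball
                }
            }
        }
        return true, nil
    }
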
	I0729 11:47:22.103288   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Start
	I0729 11:47:22.103502   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring networks are active...
	I0729 11:47:22.104291   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network default is active
	I0729 11:47:22.104576   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Ensuring network mk-default-k8s-diff-port-754486 is active
	I0729 11:47:22.105037   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Getting domain xml...
	I0729 11:47:22.105746   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Creating domain...
	I0729 11:47:23.370011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting to get IP...
	I0729 11:47:23.370892   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.371318   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.371249   71147 retry.go:31] will retry after 303.24713ms: waiting for machine to come up
	I0729 11:47:23.675985   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676540   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:23.676567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:23.676486   71147 retry.go:31] will retry after 332.87749ms: waiting for machine to come up
	I0729 11:47:24.010822   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011360   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.011388   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.011312   71147 retry.go:31] will retry after 465.260924ms: waiting for machine to come up
	I0729 11:47:24.477939   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478471   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.478517   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.478431   71147 retry.go:31] will retry after 501.294487ms: waiting for machine to come up
	I0729 11:47:24.981168   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:24.981736   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:24.981647   71147 retry.go:31] will retry after 522.082731ms: waiting for machine to come up
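
The interleaved default-k8s-diff-port-754486 lines show retry.go polling libvirt for a DHCP lease, sleeping a little longer on each miss (303ms, 332ms, 465ms, 501ms, 522ms above). A minimal sketch of such a loop; the lookup callback, attempt cap, and the jitter are assumptions, not minikube's exact retry policy:

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it reports an address, growing the delay
    // (with assumed jitter) after every miss, roughly like retry.go in the log.
    func waitForIP(lookup func() (string, bool), attempts int) (string, error) {
        delay := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(delay)
            delay += time.Duration(rand.Int63n(int64(delay))) // grow the wait each round
        }
        return "", fmt.Errorf("no IP after %d attempts", attempts)
    }
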
	I0729 11:47:25.165725   69907 crio.go:462] duration metric: took 1.532044107s to copy over tarball
	I0729 11:47:25.165811   69907 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:27.422770   69907 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256906507s)
	I0729 11:47:27.422807   69907 crio.go:469] duration metric: took 2.257052359s to extract the tarball
	I0729 11:47:27.422817   69907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:47:27.460807   69907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:27.509129   69907 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:27.509157   69907 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:27.509166   69907 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.30.3 crio true true} ...
	I0729 11:47:27.509281   69907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-731235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:27.509347   69907 ssh_runner.go:195] Run: crio config
	I0729 11:47:27.560098   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:27.560121   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:27.560133   69907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:27.560152   69907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-731235 NodeName:embed-certs-731235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:27.560290   69907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-731235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:27.560345   69907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:27.570464   69907 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:27.570555   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:27.580535   69907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 11:47:27.598211   69907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:27.615318   69907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 11:47:27.632974   69907 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:27.636858   69907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
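
Both /etc/hosts updates in this run (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: filter out any stale line for the name, append the fresh IP, and copy the temp file back over /etc/hosts. A sketch that just assembles that shell pipeline; the helper name is hypothetical:

    package sketch

    import "fmt"

    // hostsEntryCmd builds the grep-and-append pipeline seen in the log. The
    // escaped \t feeds bash's $'\t'; the second tab is a literal tab in the
    // replacement hosts line.
    func hostsEntryCmd(ip, name string) string {
        return fmt.Sprintf(
            "{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
            name, ip, name)
    }
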
	I0729 11:47:27.649277   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:27.763642   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:27.781529   69907 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235 for IP: 192.168.61.202
	I0729 11:47:27.781556   69907 certs.go:194] generating shared ca certs ...
	I0729 11:47:27.781577   69907 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:27.781758   69907 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:27.781812   69907 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:27.781825   69907 certs.go:256] generating profile certs ...
	I0729 11:47:27.781950   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/client.key
	I0729 11:47:27.782036   69907 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key.6ae4b4bc
	I0729 11:47:27.782091   69907 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key
	I0729 11:47:27.782234   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:27.782278   69907 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:27.782291   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:27.782323   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:27.782358   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:27.782388   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:27.782440   69907 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:27.783361   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:27.813522   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:27.841190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:27.877646   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:27.919310   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:47:27.952080   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:47:27.985958   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:28.010190   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/embed-certs-731235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:28.034756   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:28.059541   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:28.083582   69907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:28.113030   69907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:28.133424   69907 ssh_runner.go:195] Run: openssl version
	I0729 11:47:28.139250   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:28.150142   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154885   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.154934   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:28.160995   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:47:28.172031   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:28.184289   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189071   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.189132   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:28.194963   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:28.205555   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:28.216328   69907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221023   69907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.221091   69907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:28.227053   69907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
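
Each CA bundle above is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate it. A sketch of the hash-and-link step:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // linkCACert computes the certificate's subject hash and symlinks the PEM
    // as /etc/ssl/certs/<hash>.0, mirroring the ln -fs commands in the log.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }
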
	I0729 11:47:28.238044   69907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:28.242748   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:28.248989   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:28.255165   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:28.261178   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:28.266997   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:28.272966   69907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
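
The `-checkend 86400` probes above verify that each control-plane certificate will still be valid 24 hours from now; a non-zero exit means it expires within a day. A one-function sketch of the same check:

    package sketch

    import "os/exec"

    // validForADay reports whether the certificate at path is still valid in
    // 24h, matching `openssl x509 -noout -in <path> -checkend 86400`.
    func validForADay(path string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
    }
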
	I0729 11:47:28.278994   69907 kubeadm.go:392] StartCluster: {Name:embed-certs-731235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-731235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:28.279100   69907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:28.279142   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.317620   69907 cri.go:89] found id: ""
	I0729 11:47:28.317701   69907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:28.328260   69907 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:28.328285   69907 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:28.328365   69907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:28.338356   69907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:28.339293   69907 kubeconfig.go:125] found "embed-certs-731235" server: "https://192.168.61.202:8443"
	I0729 11:47:28.341224   69907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:28.351166   69907 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0729 11:47:28.351203   69907 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:28.351215   69907 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:28.351271   69907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:28.393883   69907 cri.go:89] found id: ""
	I0729 11:47:28.393986   69907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:28.411298   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:28.421328   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:28.421362   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:28.421406   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:47:28.430665   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:28.430746   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:28.440426   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:47:28.450406   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:28.450466   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:28.460200   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.469699   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:28.469771   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:28.479855   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:47:28.489251   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:28.489346   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
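
The block above is the stale-config sweep: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file when the grep fails, so the `kubeadm init phase kubeconfig all` that follows regenerates it. A condensed sketch; the run callback stands in for minikube's ssh_runner:

    package sketch

    import "fmt"

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference
    // https://control-plane.minikube.internal:8443, as in the log above.
    func cleanStaleKubeconfigs(run func(cmd string) error) {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            if err := run(fmt.Sprintf("sudo grep https://control-plane.minikube.internal:8443 %s", c)); err != nil {
                _ = run("sudo rm -f " + c) // missing or pointing elsewhere: let kubeadm rewrite it
            }
        }
    }
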
	I0729 11:47:28.499019   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:47:28.508770   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:28.644277   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:25.505636   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:25.506255   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:25.506195   71147 retry.go:31] will retry after 748.410801ms: waiting for machine to come up
	I0729 11:47:26.255894   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256293   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:26.256313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:26.256252   71147 retry.go:31] will retry after 1.1735659s: waiting for machine to come up
	I0729 11:47:27.430990   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:27.431494   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:27.431400   71147 retry.go:31] will retry after 1.448031075s: waiting for machine to come up
	I0729 11:47:28.880998   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881455   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:28.881483   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:28.881413   71147 retry.go:31] will retry after 1.123855306s: waiting for machine to come up
	I0729 11:47:30.006750   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007231   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:30.007261   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:30.007176   71147 retry.go:31] will retry after 2.180202817s: waiting for machine to come up
	I0729 11:47:30.200484   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.556171661s)
	I0729 11:47:30.200515   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.427523   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.499256   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:30.603274   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:30.603360   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.104293   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.603524   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:31.621119   69907 api_server.go:72] duration metric: took 1.01784341s to wait for apiserver process to appear ...
	I0729 11:47:31.621152   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:31.621173   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:31.621755   69907 api_server.go:269] stopped: https://192.168.61.202:8443/healthz: Get "https://192.168.61.202:8443/healthz": dial tcp 192.168.61.202:8443: connect: connection refused
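
From here the start path simply polls https://192.168.61.202:8443/healthz at roughly 500ms intervals: a refused connection means the apiserver process is not up yet, while the 403 from the anonymous probe and the 500 responses listing pending post-start hooks both count as "not ready yet" and trigger another attempt. A minimal polling sketch; skipping TLS verification is a simplification of the real check:

    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok" or the timeout elapses; any error or non-200 just means retry.
    func waitForHealthz(url string, timeout time.Duration) bool {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return true
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
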
	I0729 11:47:32.121931   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:32.188652   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189149   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:32.189200   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:32.189120   71147 retry.go:31] will retry after 2.231222575s: waiting for machine to come up
	I0729 11:47:34.421672   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422102   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:34.422130   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:34.422062   71147 retry.go:31] will retry after 2.830311758s: waiting for machine to come up
	I0729 11:47:34.187391   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.187427   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.187450   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.199953   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:34.199994   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:34.621483   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:34.639389   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:34.639423   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.121653   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.130808   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.130843   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:35.621391   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:35.626072   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:35.626116   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.122245   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.126823   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.126851   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:36.621364   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:36.625781   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:36.625810   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.121848   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.126505   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:37.126537   69907 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:37.622175   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:47:37.628241   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:47:37.634638   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:37.634668   69907 api_server.go:131] duration metric: took 6.013509305s to wait for apiserver health ...
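
The ~6s of 500 responses above is the apiserver declining /healthz until its apiservice-discovery-controller post-start hook completes; the tooling simply re-polls the endpoint on a short interval until it sees 200. A minimal standalone Go sketch of that polling pattern (not minikube's actual api_server.go code; the URL, retry interval, and InsecureSkipVerify are assumptions for illustration):

// healthz_poll.go: illustrative sketch only, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// This standalone sketch does not trust the apiserver's cert, so skip
		// verification here; real tooling would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: the control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
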
	I0729 11:47:37.634677   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:47:37.634684   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:37.636740   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:37.638144   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:37.649816   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
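
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is not shown in the log. For orientation only, here is a hypothetical Go sketch that emits a bridge-style conflist of the same general shape; every field value below is an assumption, not the file minikube actually generates:

// cni_conflist.go: hypothetical bridge CNI conflist; values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR, purely for illustration
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		fmt.Println(err)
		return
	}
	// A real provisioner would scp this to /etc/cni/net.d/ on the guest, as the log does.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		fmt.Println(err)
	}
}
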
	I0729 11:47:37.670562   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:37.680377   69907 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:37.680408   69907 system_pods.go:61] "coredns-7db6d8ff4d-kwx89" [f2a3fdcb-2794-470e-a1b4-fe264fb5613a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:37.680414   69907 system_pods.go:61] "etcd-embed-certs-731235" [a99bcf99-7242-4383-aa2d-597e817004db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:37.680421   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [302c4cda-07d4-46ec-af59-3339a2b91049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:37.680426   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [dae9ef32-63c1-4865-9569-ea1f11c9526c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:37.680430   69907 system_pods.go:61] "kube-proxy-hw66r" [97610503-7ca0-4d0c-8d73-249f2a48ef9a] Running
	I0729 11:47:37.680436   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [144902be-bea5-493c-986d-3834c22d82d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:37.680445   69907 system_pods.go:61] "metrics-server-569cc877fc-vqgtm" [75d59d71-3fb3-4383-bd90-3362f6b40694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:37.680449   69907 system_pods.go:61] "storage-provisioner" [24f74df4-0657-481b-9af8-f8b5c94684ea] Running
	I0729 11:47:37.680454   69907 system_pods.go:74] duration metric: took 9.870611ms to wait for pod list to return data ...
	I0729 11:47:37.680460   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:37.683573   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:37.683595   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:37.683607   69907 node_conditions.go:105] duration metric: took 3.142611ms to run NodePressure ...
	I0729 11:47:37.683626   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:37.964162   69907 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968288   69907 kubeadm.go:739] kubelet initialised
	I0729 11:47:37.968308   69907 kubeadm.go:740] duration metric: took 4.123333ms waiting for restarted kubelet to initialise ...
	I0729 11:47:37.968316   69907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:37.972978   69907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.977070   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977088   69907 pod_ready.go:81] duration metric: took 4.090197ms for pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.977097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "coredns-7db6d8ff4d-kwx89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.977102   69907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.981499   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981535   69907 pod_ready.go:81] duration metric: took 4.424741ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.981543   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "etcd-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.981550   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:37.986064   69907 pod_ready.go:97] node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986084   69907 pod_ready.go:81] duration metric: took 4.52445ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:37.986097   69907 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-731235" hosting pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-731235" has status "Ready":"False"
	I0729 11:47:37.986103   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
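
The pod_ready loop above lists kube-system pods and inspects their Ready condition, skipping pods whose node has not yet reported Ready. A rough client-go sketch of the same idea, independent of minikube's pod_ready.go (the kubeconfig path and the 2s poll interval are assumptions):

// pod_ready_sketch.go: rough client-go equivalent of waiting for kube-system pods
// to report a Ready condition; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			ready := 0
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					ready++
				}
			}
			fmt.Printf("%d/%d kube-system pods Ready\n", ready, len(pods.Items))
			if len(pods.Items) > 0 && ready == len(pods.Items) {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods")
}
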
	I0729 11:47:37.254312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254680   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | unable to find current IP address of domain default-k8s-diff-port-754486 in network mk-default-k8s-diff-port-754486
	I0729 11:47:37.254757   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | I0729 11:47:37.254658   71147 retry.go:31] will retry after 3.980350875s: waiting for machine to come up
	I0729 11:47:42.625107   70480 start.go:364] duration metric: took 3m7.301517115s to acquireMachinesLock for "old-k8s-version-188043"
	I0729 11:47:42.625180   70480 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:47:42.625189   70480 fix.go:54] fixHost starting: 
	I0729 11:47:42.625660   70480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:42.625704   70480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:42.644136   70480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0729 11:47:42.644661   70480 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:42.645210   70480 main.go:141] libmachine: Using API Version  1
	I0729 11:47:42.645242   70480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:42.645570   70480 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:42.645748   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:47:42.645875   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetState
	I0729 11:47:42.647762   70480 fix.go:112] recreateIfNeeded on old-k8s-version-188043: state=Stopped err=<nil>
	I0729 11:47:42.647808   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	W0729 11:47:42.647970   70480 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:47:42.649883   70480 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-188043" ...
	I0729 11:47:39.992010   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:41.992091   69907 pod_ready.go:102] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:43.494150   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.494177   69907 pod_ready.go:81] duration metric: took 5.508061336s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.494186   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500158   69907 pod_ready.go:92] pod "kube-proxy-hw66r" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:43.500186   69907 pod_ready.go:81] duration metric: took 5.992092ms for pod "kube-proxy-hw66r" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:43.500198   69907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:41.239616   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240073   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Found IP for machine: 192.168.50.111
	I0729 11:47:41.240103   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has current primary IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.240110   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserving static IP address...
	I0729 11:47:41.240474   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.240501   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Reserved static IP address: 192.168.50.111
	I0729 11:47:41.240529   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | skip adding static IP to network mk-default-k8s-diff-port-754486 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754486", mac: "52:54:00:c1:06:44", ip: "192.168.50.111"}
	I0729 11:47:41.240549   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Getting to WaitForSSH function...
	I0729 11:47:41.240567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Waiting for SSH to be available...
	I0729 11:47:41.242523   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.242938   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.242970   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.243112   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH client type: external
	I0729 11:47:41.243140   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa (-rw-------)
	I0729 11:47:41.243171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:47:41.243185   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | About to run SSH command:
	I0729 11:47:41.243198   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | exit 0
	I0729 11:47:41.366827   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | SSH cmd err, output: <nil>: 
	I0729 11:47:41.367268   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetConfigRaw
	I0729 11:47:41.367885   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.370241   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370574   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.370605   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.370867   70231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/config.json ...
	I0729 11:47:41.371157   70231 machine.go:94] provisionDockerMachine start ...
	I0729 11:47:41.371184   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:41.371408   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.374380   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374770   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.374805   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.374920   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.375098   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375245   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.375362   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.375555   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.375784   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.375801   70231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:47:41.479220   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:47:41.479262   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479528   70231 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754486"
	I0729 11:47:41.479555   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.479744   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.482542   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.482869   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.482903   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.483074   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.483282   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483442   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.483611   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.483828   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.484029   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.484048   70231 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754486 && echo "default-k8s-diff-port-754486" | sudo tee /etc/hostname
	I0729 11:47:41.605605   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754486
	
	I0729 11:47:41.605639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.608313   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.608698   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.608910   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.609126   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.609498   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.609650   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:41.609845   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:41.609862   70231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754486/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:47:41.724183   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:47:41.724209   70231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:47:41.724237   70231 buildroot.go:174] setting up certificates
	I0729 11:47:41.724246   70231 provision.go:84] configureAuth start
	I0729 11:47:41.724256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetMachineName
	I0729 11:47:41.724530   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:41.727462   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.727826   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.727858   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.728009   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.730256   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730639   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.730683   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.730768   70231 provision.go:143] copyHostCerts
	I0729 11:47:41.730822   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:47:41.730835   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:47:41.730904   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:47:41.731016   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:47:41.731026   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:47:41.731047   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:47:41.731151   70231 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:47:41.731161   70231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:47:41.731179   70231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:47:41.731238   70231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754486 san=[127.0.0.1 192.168.50.111 default-k8s-diff-port-754486 localhost minikube]
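
provision.go:117 above mints a server certificate for the machine with the SANs listed in that line. A compressed crypto/x509 sketch of that step; it is self-signed here for brevity (the real flow signs with the .minikube CA key), the org and SAN values come from the log line, and everything else is assumed:

// server_cert_sketch.go: illustration of generating a TLS server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-754486"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity window
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go log line above.
		DNSNames:    []string{"default-k8s-diff-port-754486", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.111")},
	}
	// Self-signed for brevity: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
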
	I0729 11:47:41.930044   70231 provision.go:177] copyRemoteCerts
	I0729 11:47:41.930097   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:47:41.930127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:41.932832   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933158   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:41.933186   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:41.933378   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:41.933565   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:41.933723   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:41.933848   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.016885   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:47:42.042982   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 11:47:42.067813   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:47:42.092573   70231 provision.go:87] duration metric: took 368.315812ms to configureAuth
	I0729 11:47:42.092601   70231 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:47:42.092761   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:42.092829   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.095761   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096177   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.096223   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.096349   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.096571   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096751   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.096891   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.097056   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.097234   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.097251   70231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:47:42.378448   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:47:42.378478   70231 machine.go:97] duration metric: took 1.007302295s to provisionDockerMachine
	I0729 11:47:42.378495   70231 start.go:293] postStartSetup for "default-k8s-diff-port-754486" (driver="kvm2")
	I0729 11:47:42.378511   70231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:47:42.378541   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.378917   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:47:42.378950   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.382127   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382539   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.382567   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.382759   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.382958   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.383171   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.383297   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.467524   70231 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:47:42.471793   70231 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:47:42.471815   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:47:42.471873   70231 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:47:42.471948   70231 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:47:42.472033   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:47:42.482148   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:42.507312   70231 start.go:296] duration metric: took 128.801138ms for postStartSetup
	I0729 11:47:42.507358   70231 fix.go:56] duration metric: took 20.43118839s for fixHost
	I0729 11:47:42.507384   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.510309   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510737   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.510769   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.510948   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.511195   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511373   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.511537   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.511694   70231 main.go:141] libmachine: Using SSH client type: native
	I0729 11:47:42.511844   70231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0729 11:47:42.511853   70231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:47:42.624913   70231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253662.599486483
	
	I0729 11:47:42.624946   70231 fix.go:216] guest clock: 1722253662.599486483
	I0729 11:47:42.624960   70231 fix.go:229] Guest: 2024-07-29 11:47:42.599486483 +0000 UTC Remote: 2024-07-29 11:47:42.507363501 +0000 UTC m=+212.369750509 (delta=92.122982ms)
	I0729 11:47:42.624988   70231 fix.go:200] guest clock delta is within tolerance: 92.122982ms
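
fix.go compares the guest clock read over SSH against the host clock and only resynchronizes when the delta exceeds a tolerance; here the ~92ms delta passes. A tiny sketch of that comparison (the 2s tolerance is an assumption, not minikube's actual threshold):

// clock_delta_sketch.go: illustration of the guest-clock tolerance check.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by no more than tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(92 * time.Millisecond) // roughly the 92.122982ms delta in the log
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}
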
	I0729 11:47:42.625005   70231 start.go:83] releasing machines lock for "default-k8s-diff-port-754486", held for 20.548870778s
	I0729 11:47:42.625050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.625322   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:42.628299   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.628799   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.628834   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.629011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629659   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629860   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:47:42.629950   70231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:47:42.629997   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.630087   70231 ssh_runner.go:195] Run: cat /version.json
	I0729 11:47:42.630106   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:47:42.633122   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633432   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633464   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.633504   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.633890   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.633973   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:42.634044   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:42.634088   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:47:42.634312   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.634387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:47:42.634489   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.634906   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:47:42.635039   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:47:42.746128   70231 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:42.754711   70231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:47:42.906989   70231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:47:42.913975   70231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:47:42.914035   70231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:47:42.931503   70231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:47:42.931535   70231 start.go:495] detecting cgroup driver to use...
	I0729 11:47:42.931591   70231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:47:42.949385   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:47:42.965940   70231 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:47:42.965989   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:47:42.982952   70231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:47:43.000214   70231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:47:43.123333   70231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:47:43.266557   70231 docker.go:233] disabling docker service ...
	I0729 11:47:43.266637   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:47:43.282521   70231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:47:43.300091   70231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:47:43.440721   70231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:47:43.577985   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:47:43.598070   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:47:43.620282   70231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:47:43.620343   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.633918   70231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:47:43.634064   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.644931   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.660559   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.676307   70231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:47:43.687970   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.699659   70231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:47:43.718571   70231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
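
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before crio is restarted. A purely illustrative in-process Go version of the two central edits; minikube itself shells out to sed over SSH exactly as logged:

// crio_conf_sketch.go: illustrative equivalent of the key sed edits above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf := string(data)
	// Pin the pause image used for pod sandboxes.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Use the cgroupfs cgroup manager, as the crio.go step in the log configures.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Println(err)
	}
}
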
	I0729 11:47:43.729820   70231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:47:43.739921   70231 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:47:43.740010   70231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:47:43.755562   70231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
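The netfilter sysctl cannot be read until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is switched on before restarting CRI-O. A minimal Go sketch of those two preconditions (assumed paths, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// br_netfilter exposes this sysctl; load the module if it is missing.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Kubernetes networking needs IPv4 forwarding turned on.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}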
	I0729 11:47:43.768161   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:43.899531   70231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:47:44.057564   70231 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:47:44.057649   70231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:47:44.062669   70231 start.go:563] Will wait 60s for crictl version
	I0729 11:47:44.062751   70231 ssh_runner.go:195] Run: which crictl
	I0729 11:47:44.066815   70231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:47:44.104368   70231 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:47:44.104469   70231 ssh_runner.go:195] Run: crio --version
	I0729 11:47:44.133158   70231 ssh_runner.go:195] Run: crio --version
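After restarting CRI-O, the start path waits up to 60s for /var/run/crio/crio.sock and then for crictl to report a version. A self-contained sketch of that socket wait, assuming a simple poll-with-timeout:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (here the CRI-O socket) until it
// appears or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is up")
}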
	I0729 11:47:44.165813   70231 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:47:44.167192   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetIP
	I0729 11:47:44.170230   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170633   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:47:44.170664   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:47:44.170908   70231 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 11:47:44.175609   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
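The bash one-liner drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP. An equivalent, purely illustrative Go version:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal mapping.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}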
	I0729 11:47:44.188628   70231 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:47:44.188748   70231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:47:44.188811   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:44.229180   70231 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:47:44.229255   70231 ssh_runner.go:195] Run: which lz4
	I0729 11:47:44.233985   70231 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:47:44.238236   70231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:47:44.238276   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:47:42.651248   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .Start
	I0729 11:47:42.651431   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring networks are active...
	I0729 11:47:42.652248   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network default is active
	I0729 11:47:42.652632   70480 main.go:141] libmachine: (old-k8s-version-188043) Ensuring network mk-old-k8s-version-188043 is active
	I0729 11:47:42.653108   70480 main.go:141] libmachine: (old-k8s-version-188043) Getting domain xml...
	I0729 11:47:42.653902   70480 main.go:141] libmachine: (old-k8s-version-188043) Creating domain...
	I0729 11:47:43.961872   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting to get IP...
	I0729 11:47:43.962928   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:43.963321   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:43.963424   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:43.963311   71329 retry.go:31] will retry after 266.698669ms: waiting for machine to come up
	I0729 11:47:44.231914   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.232480   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.232507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.232428   71329 retry.go:31] will retry after 363.884342ms: waiting for machine to come up
	I0729 11:47:44.598046   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:44.598507   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:44.598530   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:44.598465   71329 retry.go:31] will retry after 486.214401ms: waiting for machine to come up
	I0729 11:47:45.085968   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.086466   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.086511   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.086440   71329 retry.go:31] will retry after 451.181437ms: waiting for machine to come up
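The libmachine driver keeps re-reading the DHCP leases with a growing, jittered delay until the domain reports an IP (the retry.go lines above). A generic retry helper in that spirit, with hypothetical parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds, sleeping a growing, jittered interval
// between attempts, up to maxAttempts.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempt := 0
	err := retry(10, 300*time.Millisecond, func() error {
		attempt++
		if attempt < 4 { // stand-in for "unable to find current IP address"
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}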
	I0729 11:47:44.508165   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:44.508190   69907 pod_ready.go:81] duration metric: took 1.007982605s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:44.508199   69907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:46.515466   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:48.515797   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:45.761961   70231 crio.go:462] duration metric: took 1.528001524s to copy over tarball
	I0729 11:47:45.762103   70231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:47:48.135637   70231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373497372s)
	I0729 11:47:48.135673   70231 crio.go:469] duration metric: took 2.373677697s to extract the tarball
	I0729 11:47:48.135683   70231 ssh_runner.go:146] rm: /preloaded.tar.lz4
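The preloaded image tarball is copied into the guest, unpacked into /var with lz4, and then deleted. A hypothetical sketch of that extraction step using os/exec with the same tar flags the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Same flags as in the log: preserve xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	_ = os.Remove(tarball) // best-effort cleanup, as in the log
}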
	I0729 11:47:48.173007   70231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:47:48.222120   70231 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:47:48.222146   70231 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:47:48.222156   70231 kubeadm.go:934] updating node { 192.168.50.111 8444 v1.30.3 crio true true} ...
	I0729 11:47:48.222294   70231 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:47:48.222372   70231 ssh_runner.go:195] Run: crio config
	I0729 11:47:48.269094   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:48.269122   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:48.269149   70231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:47:48.269175   70231 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754486 NodeName:default-k8s-diff-port-754486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:47:48.269394   70231 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754486"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:47:48.269469   70231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:47:48.282748   70231 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:47:48.282830   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:47:48.292857   70231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 11:47:48.312165   70231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:47:48.332206   70231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:47:48.350385   70231 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0729 11:47:48.354603   70231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:47:48.368166   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:47:48.505072   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:47:48.525399   70231 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486 for IP: 192.168.50.111
	I0729 11:47:48.525436   70231 certs.go:194] generating shared ca certs ...
	I0729 11:47:48.525457   70231 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:47:48.525622   70231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:47:48.525678   70231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:47:48.525691   70231 certs.go:256] generating profile certs ...
	I0729 11:47:48.525783   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/client.key
	I0729 11:47:48.525863   70231 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key.0ed2faa3
	I0729 11:47:48.525927   70231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key
	I0729 11:47:48.526076   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:47:48.526124   70231 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:47:48.526138   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:47:48.526169   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:47:48.526211   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:47:48.526241   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:47:48.526289   70231 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:47:48.527026   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:47:48.567953   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:47:48.605538   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:47:48.639615   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:47:48.678439   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:47:48.722664   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:47:48.757436   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:47:48.797241   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/default-k8s-diff-port-754486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:47:48.825666   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:47:48.856344   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:47:48.882046   70231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:47:48.909963   70231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:47:48.928513   70231 ssh_runner.go:195] Run: openssl version
	I0729 11:47:48.934467   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:47:48.945606   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950533   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.950585   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:47:48.957222   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:47:48.969043   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:47:48.981101   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986095   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.986161   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:47:48.992153   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:47:49.004358   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:47:49.016204   70231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021070   70231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.021131   70231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:47:49.027503   70231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
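Each CA is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL locates trust anchors. A small Go wrapper around the same openssl/ln pair, shown only as a sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert creates the /etc/ssl/certs/<subject-hash>.0 symlink for a PEM cert,
// mirroring the "openssl x509 -hash" plus "ln -fs" pair in the log.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}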
	I0729 11:47:49.038545   70231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:47:49.043602   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:47:49.050327   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:47:49.056648   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:47:49.063624   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:47:49.071491   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:47:49.080125   70231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
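Each control-plane certificate is checked for expiry within the next 24 hours (openssl -checkend 86400). The same check sketched with Go's crypto/x509 instead of the CLI, using a hypothetical cert path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window (the openssl -checkend equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}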
	I0729 11:47:49.086622   70231 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-754486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:47:49.086771   70231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:47:49.086845   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.131483   70231 cri.go:89] found id: ""
	I0729 11:47:49.131580   70231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:47:49.143222   70231 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:47:49.143246   70231 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:47:49.143296   70231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:47:49.155447   70231 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:49.156410   70231 kubeconfig.go:125] found "default-k8s-diff-port-754486" server: "https://192.168.50.111:8444"
	I0729 11:47:49.158477   70231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:47:49.171515   70231 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.111
	I0729 11:47:49.171546   70231 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:47:49.171558   70231 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:47:49.171614   70231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:47:49.218584   70231 cri.go:89] found id: ""
	I0729 11:47:49.218656   70231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:47:49.237934   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:47:49.249188   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:47:49.249213   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:47:49.249276   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:47:49.260033   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:47:49.260100   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:47:49.270588   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:47:49.280326   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:47:49.280422   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:47:49.291754   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.301918   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:47:49.302005   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:47:49.312861   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:47:49.323545   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:47:49.323614   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:47:49.335556   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
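Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444; otherwise it is removed so the kubeadm init phases below regenerate it. A hypothetical Go rendering of that per-file check:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove it and let
			// "kubeadm init phase kubeconfig" write a fresh copy.
			_ = os.Remove(f)
			fmt.Println("removed", f)
		}
	}
}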
	I0729 11:47:49.347161   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:49.467792   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:45.538886   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:45.539448   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:45.539474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:45.539387   71329 retry.go:31] will retry after 730.083235ms: waiting for machine to come up
	I0729 11:47:46.270923   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.271428   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.271457   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.271383   71329 retry.go:31] will retry after 699.351116ms: waiting for machine to come up
	I0729 11:47:46.973033   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:46.973661   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:46.973689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:46.973606   71329 retry.go:31] will retry after 804.992714ms: waiting for machine to come up
	I0729 11:47:47.780217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:47.780676   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:47.780727   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:47.780651   71329 retry.go:31] will retry after 1.242092613s: waiting for machine to come up
	I0729 11:47:49.024835   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:49.025362   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:49.025384   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:49.025314   71329 retry.go:31] will retry after 1.505477936s: waiting for machine to come up
	I0729 11:47:51.014115   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:53.015922   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:50.213363   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.427510   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.489221   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:50.574558   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:47:50.574648   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.075420   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.574892   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:51.612604   70231 api_server.go:72] duration metric: took 1.038045496s to wait for apiserver process to appear ...
	I0729 11:47:51.612635   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:47:51.612656   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:51.613131   70231 api_server.go:269] stopped: https://192.168.50.111:8444/healthz: Get "https://192.168.50.111:8444/healthz": dial tcp 192.168.50.111:8444: connect: connection refused
	I0729 11:47:52.113045   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.008828   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.008861   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.008877   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.080000   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:47:55.080047   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:47:55.113269   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.123263   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.123301   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:50.532816   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:50.533325   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:50.533354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:50.533285   71329 retry.go:31] will retry after 1.877421564s: waiting for machine to come up
	I0729 11:47:52.413018   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:52.413474   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:52.413500   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:52.413405   71329 retry.go:31] will retry after 2.017909532s: waiting for machine to come up
	I0729 11:47:54.432996   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:54.433478   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:54.433506   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:54.433444   71329 retry.go:31] will retry after 2.469357423s: waiting for machine to come up
	I0729 11:47:55.612793   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:55.617264   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:55.617299   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.112811   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.119382   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:47:56.119410   70231 api_server.go:103] status: https://192.168.50.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:47:56.612944   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:47:56.617383   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:47:56.623760   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:47:56.623786   70231 api_server.go:131] duration metric: took 5.011145377s to wait for apiserver health ...
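The health wait tolerates the early 403s (anonymous user) and 500s (post-start hooks still syncing) and only proceeds once /healthz returns 200 with body "ok". A self-contained poller of the same shape, with the endpoint and timeout taken from this run as assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert during bring-up, so certificate
	// verification is skipped for this health probe only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.111:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}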
	I0729 11:47:56.623795   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:47:56.623801   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:47:56.625608   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:47:55.018201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:57.514432   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.626901   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:47:56.638585   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
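With the bridge CNI selected, a conflist is written to /etc/cni/net.d. The exact file minikube installs is not shown in the log; the snippet below is a generic bridge/host-local example of the kind of content such a conflist carries, wrapped in a small Go writer:

package main

import "os"

// A generic bridge + host-local conflist of the sort dropped into
// /etc/cni/net.d; illustrative only, not the file minikube ships.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}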
	I0729 11:47:56.661631   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:47:56.671881   70231 system_pods.go:59] 8 kube-system pods found
	I0729 11:47:56.671908   70231 system_pods.go:61] "coredns-7db6d8ff4d-d4frq" [e495bc30-3c10-4d07-b488-4dbe9b0bfb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:47:56.671916   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [de3378a8-9a12-4c4b-a6e6-61b19950d5a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:47:56.671924   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [36c2cd1b-d9de-463e-b343-661d5f14f4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:47:56.671934   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [6239a1ee-9f7d-4d9b-9d70-5659c7b08fbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:47:56.671941   70231 system_pods.go:61] "kube-proxy-4bbt5" [4e672275-1afe-4f11-80e2-62aa220e9994] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:47:56.671947   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [81b7d1ed-0163-43fb-8111-048d48efa13c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:47:56.671954   70231 system_pods.go:61] "metrics-server-569cc877fc-v94xq" [a34d0cd0-1049-4cb4-ae4b-d0c8d34fda13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:47:56.671959   70231 system_pods.go:61] "storage-provisioner" [a10d68bf-f23d-4871-9041-1e66aa089342] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:47:56.671967   70231 system_pods.go:74] duration metric: took 10.316696ms to wait for pod list to return data ...
	I0729 11:47:56.671974   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:47:56.677342   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:47:56.677368   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:47:56.677380   70231 node_conditions.go:105] duration metric: took 5.400925ms to run NodePressure ...
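The kube-system pods listed above are then waited on individually for a Ready condition, as the pod_ready lines below show. A hypothetical client-go sketch of that kind of readiness wait (the kubeconfig path and pod name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-d4frq", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}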
	I0729 11:47:56.677400   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:47:56.985230   70231 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990270   70231 kubeadm.go:739] kubelet initialised
	I0729 11:47:56.990297   70231 kubeadm.go:740] duration metric: took 5.038002ms waiting for restarted kubelet to initialise ...
	I0729 11:47:56.990307   70231 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:47:56.995626   70231 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.002678   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002729   70231 pod_ready.go:81] duration metric: took 7.079039ms for pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.002742   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "coredns-7db6d8ff4d-d4frq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.002749   70231 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.007474   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007500   70231 pod_ready.go:81] duration metric: took 4.741617ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.007510   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.007516   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.012437   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012464   70231 pod_ready.go:81] duration metric: took 4.941759ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.012474   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.012480   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.065060   70231 pod_ready.go:97] node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065103   70231 pod_ready.go:81] duration metric: took 52.614137ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	E0729 11:47:57.065124   70231 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754486" hosting pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754486" has status "Ready":"False"
	I0729 11:47:57.065133   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465390   70231 pod_ready.go:92] pod "kube-proxy-4bbt5" in "kube-system" namespace has status "Ready":"True"
	I0729 11:47:57.465414   70231 pod_ready.go:81] duration metric: took 400.26956ms for pod "kube-proxy-4bbt5" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:57.465423   70231 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:47:59.475067   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:47:56.904201   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:47:56.904719   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | unable to find current IP address of domain old-k8s-version-188043 in network mk-old-k8s-version-188043
	I0729 11:47:56.904754   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | I0729 11:47:56.904677   71329 retry.go:31] will retry after 4.224733299s: waiting for machine to come up
	I0729 11:48:02.473126   69419 start.go:364] duration metric: took 55.472263119s to acquireMachinesLock for "no-preload-297799"
	I0729 11:48:02.473181   69419 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:48:02.473195   69419 fix.go:54] fixHost starting: 
	I0729 11:48:02.473581   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:48:02.473611   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:48:02.491458   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0729 11:48:02.491939   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:48:02.492393   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:48:02.492411   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:48:02.492790   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:48:02.492983   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:02.493133   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:48:02.494640   69419 fix.go:112] recreateIfNeeded on no-preload-297799: state=Stopped err=<nil>
	I0729 11:48:02.494666   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	W0729 11:48:02.494878   69419 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:48:02.497014   69419 out.go:177] * Restarting existing kvm2 VM for "no-preload-297799" ...
	I0729 11:47:59.514514   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.515573   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.516078   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:01.132270   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132669   70480 main.go:141] libmachine: (old-k8s-version-188043) Found IP for machine: 192.168.72.61
	I0729 11:48:01.132695   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has current primary IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.132702   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserving static IP address...
	I0729 11:48:01.133140   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.133173   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | skip adding static IP to network mk-old-k8s-version-188043 - found existing host DHCP lease matching {name: "old-k8s-version-188043", mac: "52:54:00:0b:d7:0d", ip: "192.168.72.61"}
	I0729 11:48:01.133189   70480 main.go:141] libmachine: (old-k8s-version-188043) Reserved static IP address: 192.168.72.61
	I0729 11:48:01.133203   70480 main.go:141] libmachine: (old-k8s-version-188043) Waiting for SSH to be available...
	I0729 11:48:01.133217   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Getting to WaitForSSH function...
	I0729 11:48:01.135427   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135759   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.135786   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.135866   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH client type: external
	I0729 11:48:01.135894   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa (-rw-------)
	I0729 11:48:01.135931   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:01.135949   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | About to run SSH command:
	I0729 11:48:01.135986   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | exit 0
	I0729 11:48:01.262756   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:01.263165   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetConfigRaw
	I0729 11:48:01.263828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.266322   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266662   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.266689   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.266930   70480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/config.json ...
	I0729 11:48:01.267115   70480 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:01.267132   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:01.267357   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.269973   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270331   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.270354   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.270502   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.270686   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270863   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.270992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.271183   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.271391   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.271405   70480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:01.383463   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:01.383492   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383771   70480 buildroot.go:166] provisioning hostname "old-k8s-version-188043"
	I0729 11:48:01.383795   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.383992   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.387076   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387411   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.387449   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.387583   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.387776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.387929   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.388052   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.388237   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.388396   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.388409   70480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-188043 && echo "old-k8s-version-188043" | sudo tee /etc/hostname
	I0729 11:48:01.519219   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-188043
	
	I0729 11:48:01.519252   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.521972   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522356   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.522385   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.522533   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.522755   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.522955   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.523074   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.523276   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.523452   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.523470   70480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-188043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-188043/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-188043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:01.644387   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:01.644421   70480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:01.644468   70480 buildroot.go:174] setting up certificates
	I0729 11:48:01.644480   70480 provision.go:84] configureAuth start
	I0729 11:48:01.644499   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetMachineName
	I0729 11:48:01.644781   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:01.647721   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648133   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.648162   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.648322   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.650422   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.650857   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.650883   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.651028   70480 provision.go:143] copyHostCerts
	I0729 11:48:01.651088   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:01.651101   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:01.651160   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:01.651249   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:01.651257   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:01.651277   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:01.651329   70480 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:01.651336   70480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:01.651352   70480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:01.651408   70480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-188043 san=[127.0.0.1 192.168.72.61 localhost minikube old-k8s-version-188043]
	I0729 11:48:01.754387   70480 provision.go:177] copyRemoteCerts
	I0729 11:48:01.754442   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:01.754468   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.757420   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.757770   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.757803   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.758031   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.758220   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.758416   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.758574   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:01.845306   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:01.872221   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:48:01.897935   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:01.924742   70480 provision.go:87] duration metric: took 280.248018ms to configureAuth
	I0729 11:48:01.924780   70480 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:01.924957   70480 config.go:182] Loaded profile config "old-k8s-version-188043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:48:01.925042   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:01.927450   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.927780   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:01.927949   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:01.927956   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:01.928160   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928344   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:01.928511   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:01.928677   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:01.928831   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:01.928850   70480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:02.213344   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:02.213375   70480 machine.go:97] duration metric: took 946.247614ms to provisionDockerMachine
	I0729 11:48:02.213405   70480 start.go:293] postStartSetup for "old-k8s-version-188043" (driver="kvm2")
	I0729 11:48:02.213422   70480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:02.213469   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.213811   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:02.213869   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.216897   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217219   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.217253   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.217388   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.217603   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.217776   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.217957   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.305894   70480 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:02.310875   70480 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:02.310907   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:02.310982   70480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:02.311105   70480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:02.311264   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:02.321616   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:02.349732   70480 start.go:296] duration metric: took 136.310898ms for postStartSetup
	I0729 11:48:02.349788   70480 fix.go:56] duration metric: took 19.724598498s for fixHost
	I0729 11:48:02.349828   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.352855   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353194   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.353226   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.353348   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.353575   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353802   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.353983   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.354172   70480 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:02.354428   70480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.61 22 <nil> <nil>}
	I0729 11:48:02.354445   70480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:02.472983   70480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253682.444336856
	
	I0729 11:48:02.473008   70480 fix.go:216] guest clock: 1722253682.444336856
	I0729 11:48:02.473017   70480 fix.go:229] Guest: 2024-07-29 11:48:02.444336856 +0000 UTC Remote: 2024-07-29 11:48:02.349793034 +0000 UTC m=+207.164903125 (delta=94.543822ms)
	I0729 11:48:02.473042   70480 fix.go:200] guest clock delta is within tolerance: 94.543822ms
	I0729 11:48:02.473049   70480 start.go:83] releasing machines lock for "old-k8s-version-188043", held for 19.847898477s
	I0729 11:48:02.473077   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.473414   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:02.476555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.476980   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.477010   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.477343   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.477892   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478095   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .DriverName
	I0729 11:48:02.478187   70480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:02.478230   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.478285   70480 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:02.478331   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHHostname
	I0729 11:48:02.481065   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481223   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481484   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481555   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481626   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.481665   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:02.481705   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:02.481845   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.481958   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHPort
	I0729 11:48:02.482032   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482114   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHKeyPath
	I0729 11:48:02.482302   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetSSHUsername
	I0729 11:48:02.482316   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.482466   70480 sshutil.go:53] new ssh client: &{IP:192.168.72.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/old-k8s-version-188043/id_rsa Username:docker}
	I0729 11:48:02.569312   70480 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:02.593778   70480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:02.744117   70480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:02.752292   70480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:02.752380   70480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:02.774488   70480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:48:02.774515   70480 start.go:495] detecting cgroup driver to use...
	I0729 11:48:02.774581   70480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:02.793491   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:02.814235   70480 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:02.814294   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:02.832545   70480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:02.848433   70480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:02.980201   70480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:03.163341   70480 docker.go:233] disabling docker service ...
	I0729 11:48:03.163420   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:03.178758   70480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:03.197879   70480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:03.331372   70480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:03.462987   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:03.480583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:03.504239   70480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 11:48:03.504312   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.518440   70480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:03.518494   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.532072   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.543435   70480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:03.555232   70480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:03.568372   70480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:03.582423   70480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:03.582482   70480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:03.601891   70480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:48:03.614380   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:03.741008   70480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:03.896807   70480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:03.896878   70480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:03.903541   70480 start.go:563] Will wait 60s for crictl version
	I0729 11:48:03.903604   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:03.908408   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:03.952850   70480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:03.952946   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:03.984204   70480 ssh_runner.go:195] Run: crio --version
	I0729 11:48:04.018650   70480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 11:48:02.498447   69419 main.go:141] libmachine: (no-preload-297799) Calling .Start
	I0729 11:48:02.498626   69419 main.go:141] libmachine: (no-preload-297799) Ensuring networks are active...
	I0729 11:48:02.499540   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network default is active
	I0729 11:48:02.499967   69419 main.go:141] libmachine: (no-preload-297799) Ensuring network mk-no-preload-297799 is active
	I0729 11:48:02.500446   69419 main.go:141] libmachine: (no-preload-297799) Getting domain xml...
	I0729 11:48:02.501250   69419 main.go:141] libmachine: (no-preload-297799) Creating domain...
	I0729 11:48:03.852498   69419 main.go:141] libmachine: (no-preload-297799) Waiting to get IP...
	I0729 11:48:03.853523   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:03.853951   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:03.854006   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:03.853917   71505 retry.go:31] will retry after 199.060788ms: waiting for machine to come up
	I0729 11:48:04.054348   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.054940   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.054968   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.054888   71505 retry.go:31] will retry after 285.962971ms: waiting for machine to come up
	I0729 11:48:04.342491   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.343050   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.343075   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.343003   71505 retry.go:31] will retry after 363.613745ms: waiting for machine to come up
	I0729 11:48:01.973091   70231 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:03.972466   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:03.972492   70231 pod_ready.go:81] duration metric: took 6.507061375s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:03.972504   70231 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:04.020067   70480 main.go:141] libmachine: (old-k8s-version-188043) Calling .GetIP
	I0729 11:48:04.023182   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023542   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:d7:0d", ip: ""} in network mk-old-k8s-version-188043: {Iface:virbr4 ExpiryTime:2024-07-29 12:47:54 +0000 UTC Type:0 Mac:52:54:00:0b:d7:0d Iaid: IPaddr:192.168.72.61 Prefix:24 Hostname:old-k8s-version-188043 Clientid:01:52:54:00:0b:d7:0d}
	I0729 11:48:04.023571   70480 main.go:141] libmachine: (old-k8s-version-188043) DBG | domain old-k8s-version-188043 has defined IP address 192.168.72.61 and MAC address 52:54:00:0b:d7:0d in network mk-old-k8s-version-188043
	I0729 11:48:04.023796   70480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:04.028450   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:04.042324   70480 kubeadm.go:883] updating cluster {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:04.042474   70480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 11:48:04.042540   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:04.092644   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:04.092699   70480 ssh_runner.go:195] Run: which lz4
	I0729 11:48:04.096834   70480 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 11:48:04.101297   70480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:48:04.101328   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 11:48:05.518740   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:08.014306   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:04.708829   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:04.709447   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:04.709480   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:04.709349   71505 retry.go:31] will retry after 458.384125ms: waiting for machine to come up
	I0729 11:48:05.169214   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.169896   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.169930   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.169845   71505 retry.go:31] will retry after 647.103993ms: waiting for machine to come up
	I0729 11:48:05.818415   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:05.819017   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:05.819043   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:05.818969   71505 retry.go:31] will retry after 857.973397ms: waiting for machine to come up
	I0729 11:48:06.678181   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:06.678732   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:06.678756   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:06.678668   71505 retry.go:31] will retry after 928.705904ms: waiting for machine to come up
	I0729 11:48:07.609326   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:07.609866   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:07.609890   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:07.609822   71505 retry.go:31] will retry after 1.262269934s: waiting for machine to come up
	I0729 11:48:08.874373   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:08.874820   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:08.874850   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:08.874758   71505 retry.go:31] will retry after 1.824043731s: waiting for machine to come up
	I0729 11:48:05.980579   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:07.982513   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:05.929023   70480 crio.go:462] duration metric: took 1.832213096s to copy over tarball
	I0729 11:48:05.929116   70480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:48:09.096321   70480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167180641s)
	I0729 11:48:09.096346   70480 crio.go:469] duration metric: took 3.16729016s to extract the tarball
	I0729 11:48:09.096353   70480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:48:09.154049   70480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:09.193067   70480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 11:48:09.193104   70480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:09.193246   70480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.193211   70480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.193280   70480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 11:48:09.193282   70480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.193298   70480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.193217   70480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.193270   70480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.193261   70480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.194797   70480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.194815   70480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.194846   70480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:09.194885   70480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.194779   70480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 11:48:09.194789   70480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.195165   70480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.412658   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 11:48:09.423832   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.465209   70480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 11:48:09.465256   70480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 11:48:09.465312   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.475896   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 11:48:09.475946   70480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 11:48:09.475983   70480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.476028   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.511579   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 11:48:09.511606   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 11:48:09.547233   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 11:48:09.554773   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.566944   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.567736   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.574369   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.587705   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.650871   70480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 11:48:09.650920   70480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.650969   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692828   70480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 11:48:09.692876   70480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 11:48:09.692913   70480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.692884   70480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.692968   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.692989   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.711985   70480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 11:48:09.712027   70480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.712073   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.713847   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 11:48:09.713883   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 11:48:09.713918   70480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 11:48:09.713950   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 11:48:09.713959   70480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.713997   70480 ssh_runner.go:195] Run: which crictl
	I0729 11:48:09.718719   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 11:48:09.820952   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 11:48:09.820988   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 11:48:09.821051   70480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 11:48:09.821058   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 11:48:09.821142   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 11:48:09.861235   70480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 11:48:10.107019   70480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:10.014549   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.016206   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.701733   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:10.702238   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:10.702283   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:10.702199   71505 retry.go:31] will retry after 2.128592394s: waiting for machine to come up
	I0729 11:48:12.832803   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:12.833342   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:12.833364   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:12.833290   71505 retry.go:31] will retry after 2.45224359s: waiting for machine to come up
	I0729 11:48:10.479461   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:12.482426   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:14.978814   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:10.251097   70480 cache_images.go:92] duration metric: took 1.057971213s to LoadCachedImages
	W0729 11:48:10.251194   70480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0729 11:48:10.251210   70480 kubeadm.go:934] updating node { 192.168.72.61 8443 v1.20.0 crio true true} ...
	I0729 11:48:10.251341   70480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-188043 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:10.251447   70480 ssh_runner.go:195] Run: crio config
	I0729 11:48:10.310573   70480 cni.go:84] Creating CNI manager for ""
	I0729 11:48:10.310594   70480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:10.310601   70480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:10.310618   70480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-188043 NodeName:old-k8s-version-188043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:48:10.310786   70480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-188043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:48:10.310847   70480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:48:10.321378   70480 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:10.321463   70480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:10.331285   70480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 11:48:10.348814   70480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:48:10.368593   70480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 11:48:10.387619   70480 ssh_runner.go:195] Run: grep 192.168.72.61	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:10.392096   70480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:10.405499   70480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:10.532293   70480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:48:10.549883   70480 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043 for IP: 192.168.72.61
	I0729 11:48:10.549910   70480 certs.go:194] generating shared ca certs ...
	I0729 11:48:10.549930   70480 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:10.550110   70480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:10.550166   70480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:10.550180   70480 certs.go:256] generating profile certs ...
	I0729 11:48:10.550299   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/client.key
	I0729 11:48:10.550376   70480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key.2bbdfef4
	I0729 11:48:10.550428   70480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key
	I0729 11:48:10.550564   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:10.550604   70480 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:10.550617   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:10.550648   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:10.550678   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:10.550730   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:10.550787   70480 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:10.551421   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:10.588571   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:10.644840   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:10.697757   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:10.755085   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:48:10.800901   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:10.834650   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:10.866662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/old-k8s-version-188043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:10.895657   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:10.923565   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:10.948662   70480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:10.976094   70480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:10.998391   70480 ssh_runner.go:195] Run: openssl version
	I0729 11:48:11.006024   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:11.019080   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024428   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.024498   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:11.031468   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:11.043919   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:11.055933   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061208   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.061284   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:11.067590   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:48:11.079323   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:11.091404   70480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096768   70480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.096838   70480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:11.103571   70480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:11.116286   70480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:11.121569   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:11.128426   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:11.134679   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:11.141595   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:11.148605   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:11.155402   70480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:48:11.162290   70480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-188043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-188043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:11.162394   70480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:11.162441   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.206880   70480 cri.go:89] found id: ""
	I0729 11:48:11.206962   70480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:11.218154   70480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:11.218187   70480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:11.218253   70480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:11.228734   70480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:11.229980   70480 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-188043" does not appear in /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:48:11.230942   70480 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-3845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-188043" cluster setting kubeconfig missing "old-k8s-version-188043" context setting]
	I0729 11:48:11.231876   70480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:11.347256   70480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:11.358515   70480 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.61
	I0729 11:48:11.358654   70480 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:11.358668   70480 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:11.358753   70480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:11.402396   70480 cri.go:89] found id: ""
	I0729 11:48:11.402470   70480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:11.420623   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:11.431963   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:11.431989   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:11.432060   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:11.442517   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:11.442584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:11.454407   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:11.465534   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:11.465607   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:11.477716   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.489553   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:11.489625   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:11.501776   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:11.514863   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:11.514931   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:11.526206   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:11.536583   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:11.674846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:12.763352   70480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088463809s)
	I0729 11:48:12.763396   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.039246   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.168621   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:13.295910   70480 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:13.296008   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:13.797084   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.296523   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.796520   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:14.515092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:17.014806   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.287937   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:15.288420   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:15.288447   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:15.288378   71505 retry.go:31] will retry after 2.298011171s: waiting for machine to come up
	I0729 11:48:17.587882   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:17.588283   69419 main.go:141] libmachine: (no-preload-297799) DBG | unable to find current IP address of domain no-preload-297799 in network mk-no-preload-297799
	I0729 11:48:17.588317   69419 main.go:141] libmachine: (no-preload-297799) DBG | I0729 11:48:17.588242   71505 retry.go:31] will retry after 3.770149633s: waiting for machine to come up
	I0729 11:48:16.979006   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:18.979673   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:15.296251   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:15.796738   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.296361   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:16.796755   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.296237   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:17.796188   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.296099   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:18.796433   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.296864   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.796931   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:19.514721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.515056   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.515218   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:21.363217   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363766   69419 main.go:141] libmachine: (no-preload-297799) Found IP for machine: 192.168.39.120
	I0729 11:48:21.363823   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has current primary IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.363832   69419 main.go:141] libmachine: (no-preload-297799) Reserving static IP address...
	I0729 11:48:21.364272   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.364319   69419 main.go:141] libmachine: (no-preload-297799) DBG | skip adding static IP to network mk-no-preload-297799 - found existing host DHCP lease matching {name: "no-preload-297799", mac: "52:54:00:4c:20:e4", ip: "192.168.39.120"}
	I0729 11:48:21.364334   69419 main.go:141] libmachine: (no-preload-297799) Reserved static IP address: 192.168.39.120
	I0729 11:48:21.364351   69419 main.go:141] libmachine: (no-preload-297799) Waiting for SSH to be available...
	I0729 11:48:21.364386   69419 main.go:141] libmachine: (no-preload-297799) DBG | Getting to WaitForSSH function...
	I0729 11:48:21.366601   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.366955   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.366998   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.367110   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH client type: external
	I0729 11:48:21.367157   69419 main.go:141] libmachine: (no-preload-297799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa (-rw-------)
	I0729 11:48:21.367203   69419 main.go:141] libmachine: (no-preload-297799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:48:21.367222   69419 main.go:141] libmachine: (no-preload-297799) DBG | About to run SSH command:
	I0729 11:48:21.367233   69419 main.go:141] libmachine: (no-preload-297799) DBG | exit 0
	I0729 11:48:21.494963   69419 main.go:141] libmachine: (no-preload-297799) DBG | SSH cmd err, output: <nil>: 
	I0729 11:48:21.495323   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetConfigRaw
	I0729 11:48:21.495901   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.498624   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499005   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.499033   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.499332   69419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/config.json ...
	I0729 11:48:21.499542   69419 machine.go:94] provisionDockerMachine start ...
	I0729 11:48:21.499561   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:21.499749   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.501857   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502237   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.502259   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.502360   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.502527   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502693   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.502852   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.503009   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.503209   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.503226   69419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:48:21.614994   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 11:48:21.615026   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615271   69419 buildroot.go:166] provisioning hostname "no-preload-297799"
	I0729 11:48:21.615299   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.615483   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.617734   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618050   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.618082   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.618192   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.618378   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618539   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.618640   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.618818   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.619004   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.619019   69419 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-297799 && echo "no-preload-297799" | sudo tee /etc/hostname
	I0729 11:48:21.747538   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-297799
	
	I0729 11:48:21.747567   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.750275   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750618   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.750649   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.750791   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:21.751003   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751179   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:21.751302   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:21.751508   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:21.751695   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:21.751716   69419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-297799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-297799/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-297799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:48:21.877638   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:48:21.877665   69419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19337-3845/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-3845/.minikube}
	I0729 11:48:21.877688   69419 buildroot.go:174] setting up certificates
	I0729 11:48:21.877699   69419 provision.go:84] configureAuth start
	I0729 11:48:21.877710   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetMachineName
	I0729 11:48:21.877988   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:21.880318   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880703   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.880730   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.880918   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:21.883184   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883495   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:21.883525   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:21.883645   69419 provision.go:143] copyHostCerts
	I0729 11:48:21.883693   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem, removing ...
	I0729 11:48:21.883702   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem
	I0729 11:48:21.883757   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/ca.pem (1082 bytes)
	I0729 11:48:21.883845   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem, removing ...
	I0729 11:48:21.883852   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem
	I0729 11:48:21.883872   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/cert.pem (1123 bytes)
	I0729 11:48:21.883925   69419 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem, removing ...
	I0729 11:48:21.883932   69419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem
	I0729 11:48:21.883948   69419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-3845/.minikube/key.pem (1675 bytes)
	I0729 11:48:21.883992   69419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem org=jenkins.no-preload-297799 san=[127.0.0.1 192.168.39.120 localhost minikube no-preload-297799]
	I0729 11:48:22.283775   69419 provision.go:177] copyRemoteCerts
	I0729 11:48:22.283828   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:48:22.283854   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.286584   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.286954   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.286981   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.287114   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.287333   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.287503   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.287643   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.373551   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:48:22.401345   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:48:22.427243   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:48:22.452826   69419 provision.go:87] duration metric: took 575.112676ms to configureAuth
	I0729 11:48:22.452864   69419 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:48:22.453068   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:48:22.453140   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.455748   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456205   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.456237   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.456444   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.456664   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456824   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.456980   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.457113   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.457317   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.457340   69419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:48:22.736637   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:48:22.736667   69419 machine.go:97] duration metric: took 1.237111694s to provisionDockerMachine
	I0729 11:48:22.736682   69419 start.go:293] postStartSetup for "no-preload-297799" (driver="kvm2")
	I0729 11:48:22.736697   69419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:48:22.736716   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.737054   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:48:22.737080   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.739895   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740266   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.740299   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.740437   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.740660   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.740810   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.740981   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.825483   69419 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:48:22.829745   69419 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:48:22.829765   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/addons for local assets ...
	I0729 11:48:22.829844   69419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-3845/.minikube/files for local assets ...
	I0729 11:48:22.829961   69419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem -> 110642.pem in /etc/ssl/certs
	I0729 11:48:22.830063   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:48:22.839702   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:22.864154   69419 start.go:296] duration metric: took 127.451011ms for postStartSetup
	I0729 11:48:22.864200   69419 fix.go:56] duration metric: took 20.391004348s for fixHost
	I0729 11:48:22.864225   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.867047   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867522   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.867547   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.867685   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.867897   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868100   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.868278   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.868442   69419 main.go:141] libmachine: Using SSH client type: native
	I0729 11:48:22.868619   69419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 11:48:22.868634   69419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:48:22.979862   69419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253702.953940258
	
	I0729 11:48:22.979883   69419 fix.go:216] guest clock: 1722253702.953940258
	I0729 11:48:22.979892   69419 fix.go:229] Guest: 2024-07-29 11:48:22.953940258 +0000 UTC Remote: 2024-07-29 11:48:22.864205522 +0000 UTC m=+358.454662216 (delta=89.734736ms)
	I0729 11:48:22.979909   69419 fix.go:200] guest clock delta is within tolerance: 89.734736ms
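The fix.go lines above are minikube's guest-clock check: it runs date over SSH (the mangled format verbs in the log correspond to date +%s.%N, matching the seconds.nanoseconds output a few lines above), compares the result with the host clock, and proceeds because the ~90ms delta is inside its tolerance. A rough standalone reconstruction, with the SSH key path taken from this run and the arithmetic purely illustrative:

    guest=$(ssh -i /home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa \
      docker@192.168.39.120 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest=$guest host=$host delta=$(echo "$host - $guest" | bc)s"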
	I0729 11:48:22.979916   69419 start.go:83] releasing machines lock for "no-preload-297799", held for 20.506763382s
	I0729 11:48:22.979934   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.980178   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:22.983034   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983379   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.983407   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.983569   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984174   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984345   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:48:22.984440   69419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:48:22.984481   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.984593   69419 ssh_runner.go:195] Run: cat /version.json
	I0729 11:48:22.984620   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:48:22.987121   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987251   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987503   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987530   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987631   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:22.987653   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987657   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:22.987846   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.987853   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:48:22.987984   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:48:22.988013   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988070   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:48:22.988193   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:22.988190   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:48:23.101778   69419 ssh_runner.go:195] Run: systemctl --version
	I0729 11:48:23.108052   69419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:48:23.255523   69419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:48:23.261797   69419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:48:23.261872   69419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:48:23.279975   69419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
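The find invocation above loses its shell quoting (and a %p format verb) in the log; written out normally it just renames any bridge or podman CNI configs so CRI-O ignores them, which is why the line above reports 87-podman-bridge.conflist as disabled. Roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;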
	I0729 11:48:23.280003   69419 start.go:495] detecting cgroup driver to use...
	I0729 11:48:23.280070   69419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:48:23.296550   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:48:23.312947   69419 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:48:23.313014   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:48:23.327611   69419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:48:23.341549   69419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:48:23.465776   69419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:48:23.613763   69419 docker.go:233] disabling docker service ...
	I0729 11:48:23.613827   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:48:23.628485   69419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:48:23.641792   69419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:48:23.775749   69419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:48:23.912809   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:48:23.927782   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:48:23.947081   69419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 11:48:23.947153   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.957920   69419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:48:23.958002   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.968380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.979429   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:23.990529   69419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:48:24.001380   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.012490   69419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:48:24.031852   69419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
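The sed edits above only rewrite individual keys in CRI-O's drop-in config; they never show the file itself. A quick way to confirm the intended end state (pause image 3.10, cgroupfs driver, conmon in the pod cgroup, unprivileged ports allowed from 0) would be something like:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",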
	I0729 11:48:24.042914   69419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:48:24.052901   69419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:48:24.052958   69419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:48:24.065797   69419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
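The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, so the log falls back to modprobe and then enables IPv4 forwarding directly. The recovery sequence, in plain form:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'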
	I0729 11:48:24.075298   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:24.212796   69419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:48:24.364082   69419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:48:24.364169   69419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:48:24.369778   69419 start.go:563] Will wait 60s for crictl version
	I0729 11:48:24.369838   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.373750   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:48:24.417141   69419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:48:24.417232   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.447170   69419 ssh_runner.go:195] Run: crio --version
	I0729 11:48:24.491940   69419 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 11:48:21.481453   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:23.482213   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:20.296052   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:20.796633   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.296412   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:21.797025   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.296524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:22.796719   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.296741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:23.796133   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.296709   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:24.796699   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.515715   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:27.515900   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:24.493306   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetIP
	I0729 11:48:24.495927   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496432   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:48:24.496479   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:48:24.496678   69419 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:48:24.501092   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:48:24.516305   69419 kubeadm.go:883] updating cluster {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:48:24.516452   69419 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 11:48:24.516524   69419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:48:24.558195   69419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 11:48:24.558221   69419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 11:48:24.558261   69419 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.558295   69419 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.558340   69419 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.558344   69419 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.558377   69419 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:48:24.558394   69419 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.558441   69419 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.558359   69419 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:48:24.559657   69419 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:24.559681   69419 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.559700   69419 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.559628   69419 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.559630   69419 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.559635   69419 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.559896   69419 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.717545   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.722347   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.724891   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.736099   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.738159   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.746232   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 11:48:24.754163   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:24.781677   69419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 11:48:24.781726   69419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:24.781777   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.850443   69419 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 11:48:24.850478   69419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:24.850527   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.872953   69419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 11:48:24.872991   69419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:24.873031   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908765   69419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 11:48:24.908814   69419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:24.908869   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:24.908933   69419 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 11:48:24.908969   69419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:24.909008   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006764   69419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 11:48:25.006808   69419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.006862   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.006897   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:48:25.006908   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:48:25.006942   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:48:25.006982   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:48:25.007025   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 11:48:25.108737   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:48:25.108786   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:48:25.108843   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.109411   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109455   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109473   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:48:25.109491   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:25.109530   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:25.109543   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:25.124038   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.124154   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:25.161374   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161395   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 11:48:25.161411   69419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161435   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161455   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 11:48:25.161483   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:25.161495   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 11:48:25.161463   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 11:48:25.161532   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 11:48:25.430934   69419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983350   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (3.821838647s)
	I0729 11:48:28.983392   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 11:48:28.983487   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.822003707s)
	I0729 11:48:28.983512   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 11:48:28.983529   69419 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.552560815s)
	I0729 11:48:28.983541   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983566   69419 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 11:48:28.983600   69419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:28.983615   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 11:48:28.983636   69419 ssh_runner.go:195] Run: which crictl
	I0729 11:48:25.981755   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:28.481454   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:25.296898   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:25.796712   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.297094   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:26.796384   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.296247   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:27.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.296890   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:28.796799   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.296947   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:29.796825   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.015895   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:32.537283   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.876700   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.893055249s)
	I0729 11:48:30.876727   69419 ssh_runner.go:235] Completed: which crictl: (1.893072604s)
	I0729 11:48:30.876791   69419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:48:30.876737   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 11:48:30.876867   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.876921   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 11:48:30.925907   69419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 11:48:30.926007   69419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:32.689310   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.812361674s)
	I0729 11:48:32.689348   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 11:48:32.689380   69419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689330   69419 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.763306985s)
	I0729 11:48:32.689433   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 11:48:32.689437   69419 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 11:48:30.979444   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:33.480260   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:30.297007   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:30.797055   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.296172   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:31.796379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.296834   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:32.796689   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.296129   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:33.796275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.297038   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:34.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.014380   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:37.015050   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:34.662663   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.973206225s)
	I0729 11:48:34.662715   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 11:48:34.662742   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:34.662794   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 11:48:36.619459   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.956638565s)
	I0729 11:48:36.619486   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 11:48:36.619509   69419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:36.619565   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 11:48:38.577482   69419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.95789492s)
	I0729 11:48:38.577507   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 11:48:38.577529   69419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:38.577568   69419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 11:48:39.229623   69419 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19337-3845/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 11:48:39.229672   69419 cache_images.go:123] Successfully loaded all cached images
	I0729 11:48:39.229679   69419 cache_images.go:92] duration metric: took 14.67144672s to LoadCachedImages
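The 14.67s LoadCachedImages pass above repeats one pattern per image: remove the tag whose hash does not match what the runtime needs, reuse the tarball already present under /var/lib/minikube/images (the "copy: skipping ... (exists)" lines), and stream it into the container storage with podman load so CRI-O can see it. For a single image the sequence boils down to roughly:

    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0        # stale/mismatched tag in the runtime
    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0        # tarball already on the VM, so no scp
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0    # image now available to CRI-O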
	I0729 11:48:39.229693   69419 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.0-beta.0 crio true true} ...
	I0729 11:48:39.229817   69419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-297799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:48:39.229881   69419 ssh_runner.go:195] Run: crio config
	I0729 11:48:39.275907   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:39.275926   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:39.275934   69419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:48:39.275954   69419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-297799 NodeName:no-preload-297799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:48:39.276122   69419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-297799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
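One consistency point in the generated config above: the KubeletConfiguration's cgroupDriver (cgroupfs) must match the cgroup_manager written into CRI-O earlier in this run, otherwise pods fail to start once kubeadm brings the node up. A quick check against the files this run produces would be:

    sudo grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new
    sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf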
	
	I0729 11:48:39.276192   69419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 11:48:39.286552   69419 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:48:39.286610   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:48:39.296058   69419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 11:48:39.318154   69419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 11:48:39.335437   69419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 11:48:39.354036   69419 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I0729 11:48:39.358009   69419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
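Together with the earlier host.minikube.internal update, the guest's hosts file now resolves both minikube-internal names; the grep-v-then-append idiom above is just an atomic way to replace a single line. The result should look roughly like:

    grep minikube.internal /etc/hosts
    # 192.168.39.1      host.minikube.internal
    # 192.168.39.120    control-plane.minikube.internal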
	I0729 11:48:39.370253   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:48:35.994913   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:38.483330   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:35.297061   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:35.796828   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.296156   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:36.796245   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.297045   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:37.796103   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.296453   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:38.796146   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.296670   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.796357   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:39.016488   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:41.515245   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:39.512699   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
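With the drop-in (10-kubeadm.conf, carrying the ExecStart shown earlier) and kubelet.service written out, the kubelet is brought up through the usual systemd steps; condensed, the path is roughly:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d    # 10-kubeadm.conf is scp'd here
    sudo systemctl daemon-reload
    sudo systemctl start kubelet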
	I0729 11:48:39.531458   69419 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799 for IP: 192.168.39.120
	I0729 11:48:39.531482   69419 certs.go:194] generating shared ca certs ...
	I0729 11:48:39.531502   69419 certs.go:226] acquiring lock for ca certs: {Name:mk81eab08bc9d89e797c2c52fbc03f1193fd667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:48:39.531676   69419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key
	I0729 11:48:39.531730   69419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key
	I0729 11:48:39.531743   69419 certs.go:256] generating profile certs ...
	I0729 11:48:39.531841   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/client.key
	I0729 11:48:39.531928   69419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key.7b715e25
	I0729 11:48:39.531975   69419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key
	I0729 11:48:39.532117   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem (1338 bytes)
	W0729 11:48:39.532153   69419 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064_empty.pem, impossibly tiny 0 bytes
	I0729 11:48:39.532167   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:48:39.532197   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:48:39.532227   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:48:39.532258   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/certs/key.pem (1675 bytes)
	I0729 11:48:39.532304   69419 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem (1708 bytes)
	I0729 11:48:39.532940   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:48:39.571271   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:48:39.596824   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:48:39.622112   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:48:39.655054   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:48:39.693252   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:48:39.717845   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:48:39.746725   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/no-preload-297799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:48:39.772098   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:48:39.798075   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/certs/11064.pem --> /usr/share/ca-certificates/11064.pem (1338 bytes)
	I0729 11:48:39.824675   69419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/ssl/certs/110642.pem --> /usr/share/ca-certificates/110642.pem (1708 bytes)
	I0729 11:48:39.849863   69419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:48:39.867759   69419 ssh_runner.go:195] Run: openssl version
	I0729 11:48:39.874159   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11064.pem && ln -fs /usr/share/ca-certificates/11064.pem /etc/ssl/certs/11064.pem"
	I0729 11:48:39.885596   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890166   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:34 /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.890229   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11064.pem
	I0729 11:48:39.896413   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11064.pem /etc/ssl/certs/51391683.0"
	I0729 11:48:39.907803   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110642.pem && ln -fs /usr/share/ca-certificates/110642.pem /etc/ssl/certs/110642.pem"
	I0729 11:48:39.920270   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925216   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:34 /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.925279   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110642.pem
	I0729 11:48:39.931316   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110642.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:48:39.942774   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:48:39.954592   69419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959366   69419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.959422   69419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:48:39.965437   69419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
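The three ln -fs calls above install each CA under the name OpenSSL actually looks up: the certificate's subject hash plus a .0 suffix, which is where 51391683, 3ec20f2e and b5213941 come from. For one of them the derivation is simply:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # h is b5213941 here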
	I0729 11:48:39.976951   69419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:48:39.983054   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:48:39.989909   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:48:39.995930   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:48:40.002178   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:48:40.008426   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:48:40.014841   69419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
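The run of -checkend 86400 calls is a cheap expiry guard: each one asserts that the certificate stays valid for at least another 24 hours, and a non-zero exit would force regeneration before the cluster restart continues. In isolation the check is just:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"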
	I0729 11:48:40.021729   69419 kubeadm.go:392] StartCluster: {Name:no-preload-297799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-297799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:48:40.021848   69419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:48:40.021908   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.075370   69419 cri.go:89] found id: ""
	I0729 11:48:40.075473   69419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:48:40.086268   69419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:48:40.086293   69419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:48:40.086367   69419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:48:40.097168   69419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:48:40.098369   69419 kubeconfig.go:125] found "no-preload-297799" server: "https://192.168.39.120:8443"
	I0729 11:48:40.100676   69419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:48:40.111832   69419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I0729 11:48:40.111874   69419 kubeadm.go:1160] stopping kube-system containers ...
	I0729 11:48:40.111885   69419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 11:48:40.111927   69419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:48:40.151936   69419 cri.go:89] found id: ""
	I0729 11:48:40.152000   69419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 11:48:40.170773   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:48:40.181342   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:48:40.181363   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:48:40.181408   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:48:40.190984   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:48:40.191052   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:48:40.200668   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:48:40.209597   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:48:40.209645   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:48:40.219194   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.228788   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:48:40.228861   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:48:40.238965   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:48:40.248308   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:48:40.248390   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:48:40.257904   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:48:40.267645   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:40.379761   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.272628   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.487426   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.563792   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:41.657159   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:48:41.657265   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.158209   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.657442   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.712325   69419 api_server.go:72] duration metric: took 1.055172636s to wait for apiserver process to appear ...
	I0729 11:48:42.712357   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:48:42.712378   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:40.978804   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:42.979615   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:40.296481   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:40.796161   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.296479   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:41.796634   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.296314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:42.796986   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.297060   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:43.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.296048   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:44.796488   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.619558   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.619623   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.619639   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.629929   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.629961   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:45.713181   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:45.764383   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 11:48:45.764415   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 11:48:46.213129   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.217584   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.217613   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:46.713358   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:46.719382   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 11:48:46.719421   69419 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 11:48:47.212915   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:48:47.218414   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:48:47.230158   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:48:47.230187   69419 api_server.go:131] duration metric: took 4.517823741s to wait for apiserver health ...
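The 403 and 500 responses above are the expected transient states while the restarted apiserver finishes its RBAC and priority-class bootstrap hooks; the wait loop simply retries until /healthz returns 200 "ok". Below is a minimal Go sketch of that polling pattern, assuming anonymous HTTPS access and a caller-supplied endpoint; it is an illustration only, not the actual api_server.go code (which authenticates with the cluster's client certificates).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline expires. 403 (anonymous RBAC not bootstrapped
    // yet) and 500 (poststarthooks still failing) are treated as "not ready".
    func waitForHealthz(endpoint string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(endpoint + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz not ready: %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }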
	I0729 11:48:47.230197   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:48:47.230203   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:48:47.232409   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:48:44.015604   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:46.514213   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:48.514660   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.233803   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:48:47.254784   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
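The 496-byte file copied above is the generated bridge CNI config; its exact contents are not shown in this log. As an illustration of the general shape only (bridge plugin with host-local IPAM plus portmap), a sketch of writing such a conflist might look like the following; the subnet and field values here are assumptions, not taken from this run.

    package main

    import "os"

    // Illustrative bridge CNI conflist; the real /etc/cni/net.d/1-k8s.conflist
    // written in the log above is 496 bytes and may differ in detail.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }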
	I0729 11:48:47.278258   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:48:47.307307   69419 system_pods.go:59] 8 kube-system pods found
	I0729 11:48:47.307354   69419 system_pods.go:61] "coredns-5cfdc65f69-qz5f7" [12c37abb-1db8-4c96-8bf7-be9487c821df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 11:48:47.307368   69419 system_pods.go:61] "etcd-no-preload-297799" [95565d29-e8c5-4f33-84d9-a2604d25440d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 11:48:47.307380   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [870e0ec0-87db-4fee-b8ba-d08654d036de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 11:48:47.307389   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [12bf09f7-8084-47fb-b268-c9eccf906ce8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 11:48:47.307397   69419 system_pods.go:61] "kube-proxy-ggh4w" [5455f099-4470-4551-864e-5e855b77f94f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 11:48:47.307405   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [e88dae86-cfc6-456f-b14a-ebaaeac5f758] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 11:48:47.307416   69419 system_pods.go:61] "metrics-server-78fcd8795b-x4t76" [874f9fbe-8ded-48ba-993d-53cbded78379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:48:47.307423   69419 system_pods.go:61] "storage-provisioner" [8ca54feb-faf5-4e75-aef5-b7c57b89c429] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 11:48:47.307434   69419 system_pods.go:74] duration metric: took 29.153842ms to wait for pod list to return data ...
	I0729 11:48:47.307447   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:48:47.324625   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:48:47.324677   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:48:47.324691   69419 node_conditions.go:105] duration metric: took 17.237885ms to run NodePressure ...
	I0729 11:48:47.324711   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 11:48:47.612726   69419 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619335   69419 kubeadm.go:739] kubelet initialised
	I0729 11:48:47.619356   69419 kubeadm.go:740] duration metric: took 6.608982ms waiting for restarted kubelet to initialise ...
	I0729 11:48:47.619364   69419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:48:47.625462   69419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
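The pod_ready.go waits above (and the recurring "Ready":"False" lines from the other clusters) reduce to checking the PodReady condition on each pod. A minimal client-go sketch of that check follows, assuming a caller-supplied kubeconfig path; it is an equivalent illustration, not minikube's own helper.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has condition Ready=True,
    // which is what the pod_ready.go:92 / :102 log lines are recording.
    func podIsReady(kubeconfig, namespace, name string) (bool, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return false, err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return false, err
        }
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, fmt.Errorf("pod %s/%s has no Ready condition yet", namespace, name)
    }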
	I0729 11:48:45.479610   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:47.481743   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.978596   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:45.297079   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:45.796411   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.297077   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:46.796676   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.296378   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:47.796359   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.296252   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:48.796407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.296669   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:49.796307   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.516689   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:53.016717   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:49.632321   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.131647   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:52.633099   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:52.633127   69419 pod_ready.go:81] duration metric: took 5.007638065s for pod "coredns-5cfdc65f69-qz5f7" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.633136   69419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:52.480576   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.979758   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:50.296349   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:50.796922   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.296977   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:51.797050   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.296223   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:52.796774   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.296621   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:53.796300   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.296480   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:54.796902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.515017   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:57.515244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:54.640065   69419 pod_ready.go:102] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.648288   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.648318   69419 pod_ready.go:81] duration metric: took 4.015175534s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.648327   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.653979   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.654012   69419 pod_ready.go:81] duration metric: took 5.676586ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.654027   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664507   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.664533   69419 pod_ready.go:81] duration metric: took 10.499453ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.664544   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669414   69419 pod_ready.go:92] pod "kube-proxy-ggh4w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.669439   69419 pod_ready.go:81] duration metric: took 4.888994ms for pod "kube-proxy-ggh4w" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.669449   69419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673888   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:48:56.673913   69419 pod_ready.go:81] duration metric: took 4.457007ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:56.673924   69419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	I0729 11:48:58.682501   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:56.982680   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:59.479587   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:48:55.296128   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:55.796141   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.296196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:56.796435   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.296155   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:57.796741   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:58.796190   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.296902   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:48:59.797062   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.013753   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:02.014435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.180620   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.183481   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:01.481530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:03.978979   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:00.296074   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:00.796430   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.296402   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:01.796722   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.296594   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:02.796193   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.297020   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:03.796865   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.296072   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.796318   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:04.015636   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:06.514933   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.681102   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.681462   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.979240   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:07.979773   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.979865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:05.296957   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:05.796485   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.296051   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:06.796953   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.296457   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:07.796342   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.296933   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:08.796449   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.297078   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.796713   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:09.014934   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:11.515032   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:13.515665   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:09.683191   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.181155   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.182012   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:12.482327   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:14.979064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:10.296959   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:10.796677   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.296975   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:11.796715   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.296262   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:12.796937   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:13.296939   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:13.297014   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:13.340009   70480 cri.go:89] found id: ""
	I0729 11:49:13.340034   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.340041   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:13.340047   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:13.340112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:13.378003   70480 cri.go:89] found id: ""
	I0729 11:49:13.378029   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.378037   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:13.378044   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:13.378098   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:13.417130   70480 cri.go:89] found id: ""
	I0729 11:49:13.417162   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.417169   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:13.417175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:13.417253   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:13.461969   70480 cri.go:89] found id: ""
	I0729 11:49:13.462001   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.462012   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:13.462019   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:13.462089   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:13.502341   70480 cri.go:89] found id: ""
	I0729 11:49:13.502369   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.502375   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:13.502382   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:13.502434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:13.538099   70480 cri.go:89] found id: ""
	I0729 11:49:13.538123   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.538137   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:13.538143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:13.538192   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:13.575136   70480 cri.go:89] found id: ""
	I0729 11:49:13.575165   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.575172   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:13.575180   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:13.575241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:13.609066   70480 cri.go:89] found id: ""
	I0729 11:49:13.609098   70480 logs.go:276] 0 containers: []
	W0729 11:49:13.609106   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:13.609114   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:13.609126   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:13.622813   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:13.622842   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:13.765497   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:13.765519   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:13.765532   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:13.831517   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:13.831554   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:13.877738   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:13.877771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
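When no kube-apiserver process or control-plane containers can be found (the old-k8s-version cluster in the 70480 lines above), the tooling falls back to gathering kubelet, dmesg, CRI-O and container-status output. A rough Go sketch of the same gathering, run locally rather than through ssh_runner, is shown below; the command list is abridged from the log lines above (the trailing "| tail -n 400" pipes are dropped for simplicity).

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same diagnostic commands the log shows being run remotely,
        // executed locally for illustration.
        cmds := [][]string{
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
            {"journalctl", "-u", "crio", "-n", "400"},
            {"crictl", "ps", "-a"},
        }
        for _, c := range cmds {
            out, err := exec.Command("sudo", c...).CombinedOutput()
            fmt.Printf("==> %v (err=%v)\n%s\n", c, err, out)
        }
    }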
	I0729 11:49:16.015086   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:18.514995   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.683827   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.180229   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.979975   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:19.479362   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:16.429475   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:16.447454   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:16.447528   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:16.486991   70480 cri.go:89] found id: ""
	I0729 11:49:16.487018   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.487028   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:16.487036   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:16.487095   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:16.526155   70480 cri.go:89] found id: ""
	I0729 11:49:16.526180   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.526187   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:16.526192   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:16.526251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:16.562054   70480 cri.go:89] found id: ""
	I0729 11:49:16.562080   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.562156   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:16.562175   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:16.562229   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:16.598866   70480 cri.go:89] found id: ""
	I0729 11:49:16.598896   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.598907   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:16.598915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:16.598984   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:16.637590   70480 cri.go:89] found id: ""
	I0729 11:49:16.637615   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.637623   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:16.637628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:16.637677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:16.676712   70480 cri.go:89] found id: ""
	I0729 11:49:16.676738   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.676749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:16.676756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:16.676844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:16.717213   70480 cri.go:89] found id: ""
	I0729 11:49:16.717242   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.717250   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:16.717256   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:16.717309   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:16.755619   70480 cri.go:89] found id: ""
	I0729 11:49:16.755644   70480 logs.go:276] 0 containers: []
	W0729 11:49:16.755652   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:16.755660   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:16.755677   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:16.825987   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:16.826023   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:16.869351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:16.869386   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:16.920850   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:16.920888   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:16.935884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:16.935921   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:17.021524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.521810   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:19.534761   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:19.534826   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:19.571988   70480 cri.go:89] found id: ""
	I0729 11:49:19.572019   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.572029   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:19.572037   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:19.572097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:19.607389   70480 cri.go:89] found id: ""
	I0729 11:49:19.607418   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.607427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:19.607434   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:19.607496   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:19.645810   70480 cri.go:89] found id: ""
	I0729 11:49:19.645842   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.645853   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:19.645861   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:19.645924   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:19.683628   70480 cri.go:89] found id: ""
	I0729 11:49:19.683655   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.683663   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:19.683669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:19.683715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:19.719396   70480 cri.go:89] found id: ""
	I0729 11:49:19.719424   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.719435   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:19.719442   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:19.719503   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:19.755341   70480 cri.go:89] found id: ""
	I0729 11:49:19.755372   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.755383   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:19.755390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:19.755443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:19.792562   70480 cri.go:89] found id: ""
	I0729 11:49:19.792594   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.792604   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:19.792611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:19.792674   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:19.826783   70480 cri.go:89] found id: ""
	I0729 11:49:19.826808   70480 logs.go:276] 0 containers: []
	W0729 11:49:19.826815   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:19.826824   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:19.826835   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:19.878538   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:19.878573   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:19.893066   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:19.893094   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:19.966152   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:19.966177   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:19.966190   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:20.042796   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:20.042831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:20.515422   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.016350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.681192   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.681786   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:21.486048   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:23.979078   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:22.581639   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:22.595713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:22.595791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:22.639198   70480 cri.go:89] found id: ""
	I0729 11:49:22.639227   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.639239   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:22.639247   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:22.639304   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:22.681073   70480 cri.go:89] found id: ""
	I0729 11:49:22.681103   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.681117   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:22.681124   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:22.681183   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:22.717186   70480 cri.go:89] found id: ""
	I0729 11:49:22.717216   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.717226   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:22.717233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:22.717293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:22.755508   70480 cri.go:89] found id: ""
	I0729 11:49:22.755536   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.755546   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:22.755563   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:22.755626   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:22.800450   70480 cri.go:89] found id: ""
	I0729 11:49:22.800484   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.800495   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:22.800503   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:22.800567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:22.845555   70480 cri.go:89] found id: ""
	I0729 11:49:22.845581   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.845588   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:22.845594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:22.845643   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:22.895449   70480 cri.go:89] found id: ""
	I0729 11:49:22.895476   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.895483   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:22.895488   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:22.895536   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:22.943041   70480 cri.go:89] found id: ""
	I0729 11:49:22.943071   70480 logs.go:276] 0 containers: []
	W0729 11:49:22.943081   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:22.943092   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:22.943108   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:23.002403   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:23.002448   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:23.019436   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:23.019463   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:23.090680   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:23.090718   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:23.090734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:23.173647   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:23.173687   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:25.515416   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.014796   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.181898   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.680932   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:26.481482   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:28.980230   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:25.719961   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:25.733760   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:25.733829   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:25.769965   70480 cri.go:89] found id: ""
	I0729 11:49:25.769997   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.770008   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:25.770015   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:25.770079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:25.805786   70480 cri.go:89] found id: ""
	I0729 11:49:25.805818   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.805829   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:25.805836   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:25.805899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:25.841948   70480 cri.go:89] found id: ""
	I0729 11:49:25.841978   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.841988   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:25.841996   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:25.842056   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:25.880600   70480 cri.go:89] found id: ""
	I0729 11:49:25.880626   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.880636   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:25.880644   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:25.880710   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:25.918637   70480 cri.go:89] found id: ""
	I0729 11:49:25.918671   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.918683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:25.918691   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:25.918766   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:25.955683   70480 cri.go:89] found id: ""
	I0729 11:49:25.955716   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.955726   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:25.955733   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:25.955793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:25.991796   70480 cri.go:89] found id: ""
	I0729 11:49:25.991826   70480 logs.go:276] 0 containers: []
	W0729 11:49:25.991835   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:25.991844   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:25.991908   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:26.027595   70480 cri.go:89] found id: ""
	I0729 11:49:26.027623   70480 logs.go:276] 0 containers: []
	W0729 11:49:26.027634   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:26.027644   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:26.027658   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:26.114463   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:26.114500   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:26.156798   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:26.156834   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:26.206910   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:26.206940   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:26.221037   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:26.221065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:26.293788   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:28.794321   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:28.809573   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:28.809632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:28.860918   70480 cri.go:89] found id: ""
	I0729 11:49:28.860945   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.860952   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:28.860958   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:28.861011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:28.913038   70480 cri.go:89] found id: ""
	I0729 11:49:28.913069   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.913078   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:28.913085   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:28.913147   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:28.963679   70480 cri.go:89] found id: ""
	I0729 11:49:28.963704   70480 logs.go:276] 0 containers: []
	W0729 11:49:28.963714   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:28.963722   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:28.963787   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:29.003935   70480 cri.go:89] found id: ""
	I0729 11:49:29.003962   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.003970   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:29.003976   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:29.004033   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:29.044990   70480 cri.go:89] found id: ""
	I0729 11:49:29.045027   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.045034   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:29.045040   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:29.045096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:29.079923   70480 cri.go:89] found id: ""
	I0729 11:49:29.079945   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.079953   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:29.079958   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:29.080004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:29.117495   70480 cri.go:89] found id: ""
	I0729 11:49:29.117520   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.117528   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:29.117534   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:29.117580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:29.155254   70480 cri.go:89] found id: ""
	I0729 11:49:29.155285   70480 logs.go:276] 0 containers: []
	W0729 11:49:29.155295   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:29.155305   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:29.155319   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:29.207659   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:29.207698   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:29.221875   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:29.221904   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:29.295613   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:29.295637   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:29.295647   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:29.376114   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:29.376148   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:30.515987   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.015616   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:30.687554   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.180446   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.480064   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:33.480740   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:31.916592   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:31.930301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:31.930373   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:31.969552   70480 cri.go:89] found id: ""
	I0729 11:49:31.969580   70480 logs.go:276] 0 containers: []
	W0729 11:49:31.969588   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:31.969594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:31.969650   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:32.003765   70480 cri.go:89] found id: ""
	I0729 11:49:32.003795   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.003804   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:32.003811   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:32.003873   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:32.038447   70480 cri.go:89] found id: ""
	I0729 11:49:32.038475   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.038486   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:32.038492   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:32.038558   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:32.077766   70480 cri.go:89] found id: ""
	I0729 11:49:32.077793   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.077805   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:32.077813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:32.077866   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:32.112599   70480 cri.go:89] found id: ""
	I0729 11:49:32.112630   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.112640   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:32.112648   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:32.112711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:32.150385   70480 cri.go:89] found id: ""
	I0729 11:49:32.150410   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.150417   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:32.150423   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:32.150481   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:32.185148   70480 cri.go:89] found id: ""
	I0729 11:49:32.185172   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.185182   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:32.185189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:32.185251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:32.219652   70480 cri.go:89] found id: ""
	I0729 11:49:32.219685   70480 logs.go:276] 0 containers: []
	W0729 11:49:32.219696   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:32.219706   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:32.219720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:32.233440   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:32.233468   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:32.300495   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:32.300523   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:32.300540   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:32.381361   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:32.381400   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:32.422678   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:32.422730   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:34.974183   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:34.987832   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:34.987912   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:35.028216   70480 cri.go:89] found id: ""
	I0729 11:49:35.028251   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.028262   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:35.028269   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:35.028333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:35.067585   70480 cri.go:89] found id: ""
	I0729 11:49:35.067616   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.067626   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:35.067634   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:35.067698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:35.103319   70480 cri.go:89] found id: ""
	I0729 11:49:35.103346   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.103355   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:35.103362   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:35.103426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:35.138637   70480 cri.go:89] found id: ""
	I0729 11:49:35.138673   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.138714   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:35.138726   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:35.138804   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:35.176249   70480 cri.go:89] found id: ""
	I0729 11:49:35.176285   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.176293   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:35.176298   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:35.176358   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:35.218168   70480 cri.go:89] found id: ""
	I0729 11:49:35.218194   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.218202   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:35.218208   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:35.218265   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:35.515188   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.518451   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.180771   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:37.181078   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.979448   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:38.482849   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:35.253594   70480 cri.go:89] found id: ""
	I0729 11:49:35.253634   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.253641   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:35.253655   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:35.253716   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:35.289229   70480 cri.go:89] found id: ""
	I0729 11:49:35.289258   70480 logs.go:276] 0 containers: []
	W0729 11:49:35.289269   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:35.289279   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:35.289294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:35.341152   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:35.341186   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:35.355884   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:35.355925   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:35.426135   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:35.426160   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:35.426172   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:35.508387   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:35.508422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.047364   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:38.061026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:38.061088   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:38.096252   70480 cri.go:89] found id: ""
	I0729 11:49:38.096280   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.096290   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:38.096297   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:38.096365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:38.133007   70480 cri.go:89] found id: ""
	I0729 11:49:38.133040   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.133051   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:38.133058   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:38.133130   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:38.168040   70480 cri.go:89] found id: ""
	I0729 11:49:38.168063   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.168073   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:38.168086   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:38.168160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:38.205440   70480 cri.go:89] found id: ""
	I0729 11:49:38.205464   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.205471   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:38.205476   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:38.205524   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:38.243345   70480 cri.go:89] found id: ""
	I0729 11:49:38.243373   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.243383   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:38.243390   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:38.243449   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:38.278506   70480 cri.go:89] found id: ""
	I0729 11:49:38.278537   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.278549   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:38.278557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:38.278616   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:38.317917   70480 cri.go:89] found id: ""
	I0729 11:49:38.317951   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.317962   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:38.317970   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:38.318032   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:38.353198   70480 cri.go:89] found id: ""
	I0729 11:49:38.353228   70480 logs.go:276] 0 containers: []
	W0729 11:49:38.353236   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:38.353245   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:38.353259   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:38.367239   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:38.367268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:38.445964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:38.445989   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:38.446002   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:38.528232   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:38.528268   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:38.565958   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:38.565992   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:40.014625   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:42.015244   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:39.682072   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.682635   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:44.180224   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:40.979943   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:43.481875   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:41.123139   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:41.137233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:41.137299   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:41.179468   70480 cri.go:89] found id: ""
	I0729 11:49:41.179492   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.179502   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:41.179508   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:41.179559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:41.215586   70480 cri.go:89] found id: ""
	I0729 11:49:41.215612   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.215620   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:41.215625   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:41.215682   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:41.259473   70480 cri.go:89] found id: ""
	I0729 11:49:41.259495   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.259503   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:41.259508   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:41.259562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:41.302914   70480 cri.go:89] found id: ""
	I0729 11:49:41.302939   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.302947   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:41.302953   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:41.303012   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:41.339826   70480 cri.go:89] found id: ""
	I0729 11:49:41.339857   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.339868   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:41.339876   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:41.339944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:41.376044   70480 cri.go:89] found id: ""
	I0729 11:49:41.376067   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.376074   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:41.376080   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:41.376126   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:41.412216   70480 cri.go:89] found id: ""
	I0729 11:49:41.412241   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.412249   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:41.412255   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:41.412311   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:41.448264   70480 cri.go:89] found id: ""
	I0729 11:49:41.448294   70480 logs.go:276] 0 containers: []
	W0729 11:49:41.448305   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:41.448315   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:41.448331   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:41.499936   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:41.499974   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:41.517126   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:41.517151   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:41.590153   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:41.590185   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:41.590201   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:41.670830   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:41.670866   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.212782   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:44.226750   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:44.226815   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:44.261492   70480 cri.go:89] found id: ""
	I0729 11:49:44.261517   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.261524   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:44.261530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:44.261577   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:44.296391   70480 cri.go:89] found id: ""
	I0729 11:49:44.296426   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.296435   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:44.296444   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:44.296510   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:44.333335   70480 cri.go:89] found id: ""
	I0729 11:49:44.333365   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.333377   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:44.333384   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:44.333447   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:44.370607   70480 cri.go:89] found id: ""
	I0729 11:49:44.370639   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.370650   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:44.370657   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:44.370734   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:44.404229   70480 cri.go:89] found id: ""
	I0729 11:49:44.404257   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.404265   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:44.404271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:44.404332   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:44.439198   70480 cri.go:89] found id: ""
	I0729 11:49:44.439227   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.439238   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:44.439244   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:44.439302   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:44.473835   70480 cri.go:89] found id: ""
	I0729 11:49:44.473887   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.473899   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:44.473908   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:44.473971   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:44.518006   70480 cri.go:89] found id: ""
	I0729 11:49:44.518031   70480 logs.go:276] 0 containers: []
	W0729 11:49:44.518040   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:44.518050   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:44.518065   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:44.560188   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:44.560222   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:44.609565   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:44.609602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:44.624787   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:44.624826   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:44.707388   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:44.707410   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:44.707422   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:44.515480   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.013967   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:46.181170   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:48.680460   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:45.482413   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.484420   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:49.982145   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:47.283951   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:47.297013   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:47.297080   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:47.331979   70480 cri.go:89] found id: ""
	I0729 11:49:47.332009   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.332018   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:47.332023   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:47.332071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:47.367888   70480 cri.go:89] found id: ""
	I0729 11:49:47.367914   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.367925   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:47.367931   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:47.367991   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:47.409364   70480 cri.go:89] found id: ""
	I0729 11:49:47.409392   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.409404   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:47.409410   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:47.409462   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:47.442554   70480 cri.go:89] found id: ""
	I0729 11:49:47.442583   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.442594   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:47.442602   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:47.442656   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:47.476662   70480 cri.go:89] found id: ""
	I0729 11:49:47.476692   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.476704   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:47.476713   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:47.476775   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:47.513779   70480 cri.go:89] found id: ""
	I0729 11:49:47.513809   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.513819   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:47.513827   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:47.513885   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:47.550023   70480 cri.go:89] found id: ""
	I0729 11:49:47.550047   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.550053   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:47.550059   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:47.550120   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:47.586131   70480 cri.go:89] found id: ""
	I0729 11:49:47.586157   70480 logs.go:276] 0 containers: []
	W0729 11:49:47.586165   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:47.586174   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:47.586187   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:47.671326   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:47.671365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:47.710573   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:47.710601   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:47.763248   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:47.763284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:47.779516   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:47.779545   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:47.857474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:49.014878   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:51.515152   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.515473   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.682492   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:53.179515   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:52.479384   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:54.980972   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:50.358275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:50.371438   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:50.371501   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:50.408776   70480 cri.go:89] found id: ""
	I0729 11:49:50.408803   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.408813   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:50.408820   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:50.408881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:50.443502   70480 cri.go:89] found id: ""
	I0729 11:49:50.443528   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.443536   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:50.443541   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:50.443600   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:50.480429   70480 cri.go:89] found id: ""
	I0729 11:49:50.480454   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.480463   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:50.480470   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:50.480525   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:50.518753   70480 cri.go:89] found id: ""
	I0729 11:49:50.518779   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.518789   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:50.518796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:50.518838   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:50.555970   70480 cri.go:89] found id: ""
	I0729 11:49:50.556000   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.556010   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:50.556022   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:50.556086   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:50.592342   70480 cri.go:89] found id: ""
	I0729 11:49:50.592374   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.592385   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:50.592392   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:50.592458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:50.628772   70480 cri.go:89] found id: ""
	I0729 11:49:50.628801   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.628813   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:50.628859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:50.628919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:50.677549   70480 cri.go:89] found id: ""
	I0729 11:49:50.677579   70480 logs.go:276] 0 containers: []
	W0729 11:49:50.677588   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:50.677598   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:50.677612   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:50.734543   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:50.734579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:50.749418   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:50.749445   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:50.825728   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:50.825754   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:50.825773   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:50.901579   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:50.901615   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.439920   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:53.453322   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:53.453381   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:53.491594   70480 cri.go:89] found id: ""
	I0729 11:49:53.491622   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.491632   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:53.491638   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:53.491698   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:53.527160   70480 cri.go:89] found id: ""
	I0729 11:49:53.527188   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.527201   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:53.527207   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:53.527264   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:53.563787   70480 cri.go:89] found id: ""
	I0729 11:49:53.563819   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.563830   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:53.563838   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:53.563899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:53.601551   70480 cri.go:89] found id: ""
	I0729 11:49:53.601575   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.601583   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:53.601589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:53.601634   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:53.636711   70480 cri.go:89] found id: ""
	I0729 11:49:53.636738   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.636748   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:53.636755   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:53.636824   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:53.677809   70480 cri.go:89] found id: ""
	I0729 11:49:53.677852   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.677864   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:53.677872   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:53.677932   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:53.714548   70480 cri.go:89] found id: ""
	I0729 11:49:53.714579   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.714590   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:53.714597   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:53.714663   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:53.751406   70480 cri.go:89] found id: ""
	I0729 11:49:53.751438   70480 logs.go:276] 0 containers: []
	W0729 11:49:53.751448   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:53.751459   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:53.751474   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:53.834905   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:53.834942   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:53.880818   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:53.880852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:53.935913   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:53.935948   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:53.950053   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:53.950078   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:54.027378   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.014381   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:58.513958   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:55.180502   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.181274   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.182119   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:57.479530   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:59.981806   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:49:56.528379   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:56.541859   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:56.541930   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:56.580566   70480 cri.go:89] found id: ""
	I0729 11:49:56.580612   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.580621   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:56.580629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:56.580687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:56.616390   70480 cri.go:89] found id: ""
	I0729 11:49:56.616419   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.616427   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:56.616433   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:56.616483   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:56.653245   70480 cri.go:89] found id: ""
	I0729 11:49:56.653273   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.653281   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:56.653286   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:56.653345   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:56.693011   70480 cri.go:89] found id: ""
	I0729 11:49:56.693033   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.693041   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:56.693047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:56.693115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:56.729684   70480 cri.go:89] found id: ""
	I0729 11:49:56.729714   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.729723   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:56.729736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:56.729799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:56.764634   70480 cri.go:89] found id: ""
	I0729 11:49:56.764675   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.764684   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:56.764692   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:56.764753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:56.800672   70480 cri.go:89] found id: ""
	I0729 11:49:56.800703   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.800714   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:56.800721   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:56.800784   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:56.838727   70480 cri.go:89] found id: ""
	I0729 11:49:56.838758   70480 logs.go:276] 0 containers: []
	W0729 11:49:56.838769   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:56.838781   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:49:56.838794   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:49:56.918017   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:49:56.918043   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:49:56.918057   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:49:57.011900   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:49:57.011951   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:49:57.055320   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:57.055350   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:49:57.113681   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:49:57.113725   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:49:59.629516   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:49:59.643794   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:49:59.643872   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:49:59.682543   70480 cri.go:89] found id: ""
	I0729 11:49:59.682571   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.682580   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:49:59.682586   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:49:59.682649   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:49:59.719313   70480 cri.go:89] found id: ""
	I0729 11:49:59.719341   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.719352   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:49:59.719360   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:49:59.719415   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:49:59.759567   70480 cri.go:89] found id: ""
	I0729 11:49:59.759593   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.759603   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:49:59.759611   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:49:59.759668   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:49:59.796159   70480 cri.go:89] found id: ""
	I0729 11:49:59.796180   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.796187   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:49:59.796192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:49:59.796247   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:49:59.836178   70480 cri.go:89] found id: ""
	I0729 11:49:59.836199   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.836207   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:49:59.836212   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:49:59.836263   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:49:59.876751   70480 cri.go:89] found id: ""
	I0729 11:49:59.876783   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.876795   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:49:59.876802   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:49:59.876863   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:49:59.916163   70480 cri.go:89] found id: ""
	I0729 11:49:59.916196   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.916207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:49:59.916217   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:49:59.916281   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:49:59.953581   70480 cri.go:89] found id: ""
	I0729 11:49:59.953611   70480 logs.go:276] 0 containers: []
	W0729 11:49:59.953621   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:49:59.953631   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:49:59.953649   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:00.009128   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:00.009167   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:00.024681   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:00.024710   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:00.098939   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:00.098966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:00.098980   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:00.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:00.186125   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:01.015333   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:03.017456   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:01.682621   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.180814   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.480490   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:04.481157   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:02.726382   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:02.740727   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:02.740799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:02.779624   70480 cri.go:89] found id: ""
	I0729 11:50:02.779653   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.779664   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:02.779672   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:02.779731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:02.816044   70480 cri.go:89] found id: ""
	I0729 11:50:02.816076   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.816087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:02.816094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:02.816168   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:02.859406   70480 cri.go:89] found id: ""
	I0729 11:50:02.859434   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.859445   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:02.859453   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:02.859514   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:02.897019   70480 cri.go:89] found id: ""
	I0729 11:50:02.897049   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.897058   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:02.897064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:02.897123   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:02.936810   70480 cri.go:89] found id: ""
	I0729 11:50:02.936843   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.936854   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:02.936860   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:02.936919   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:02.973391   70480 cri.go:89] found id: ""
	I0729 11:50:02.973412   70480 logs.go:276] 0 containers: []
	W0729 11:50:02.973420   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:02.973426   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:02.973485   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:03.010867   70480 cri.go:89] found id: ""
	I0729 11:50:03.010961   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.010983   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:03.011001   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:03.011082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:03.048287   70480 cri.go:89] found id: ""
	I0729 11:50:03.048321   70480 logs.go:276] 0 containers: []
	W0729 11:50:03.048332   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:03.048343   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:03.048360   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:03.102752   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:03.102790   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:03.117732   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:03.117759   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:03.193620   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:03.193643   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:03.193655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:03.277205   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:03.277250   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:05.513602   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:07.514141   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.181449   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:08.682052   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:06.980021   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:09.479308   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:05.833546   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:05.849129   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:05.849191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:05.893059   70480 cri.go:89] found id: ""
	I0729 11:50:05.893094   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.893105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:05.893113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:05.893182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:05.927844   70480 cri.go:89] found id: ""
	I0729 11:50:05.927879   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.927889   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:05.927896   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:05.927960   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:05.964427   70480 cri.go:89] found id: ""
	I0729 11:50:05.964451   70480 logs.go:276] 0 containers: []
	W0729 11:50:05.964458   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:05.964464   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:05.964509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:06.001963   70480 cri.go:89] found id: ""
	I0729 11:50:06.001995   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.002002   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:06.002008   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:06.002055   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:06.038838   70480 cri.go:89] found id: ""
	I0729 11:50:06.038869   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.038880   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:06.038888   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:06.038948   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:06.073952   70480 cri.go:89] found id: ""
	I0729 11:50:06.073985   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.073995   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:06.074003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:06.074063   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:06.110495   70480 cri.go:89] found id: ""
	I0729 11:50:06.110524   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.110535   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:06.110541   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:06.110603   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:06.146851   70480 cri.go:89] found id: ""
	I0729 11:50:06.146893   70480 logs.go:276] 0 containers: []
	W0729 11:50:06.146904   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:06.146915   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:06.146931   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:06.201779   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:06.201814   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:06.216407   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:06.216434   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:06.294362   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:06.294382   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:06.294394   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:06.374381   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:06.374415   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:08.920326   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:08.933875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:08.933940   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:08.974589   70480 cri.go:89] found id: ""
	I0729 11:50:08.974615   70480 logs.go:276] 0 containers: []
	W0729 11:50:08.974623   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:08.974629   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:08.974691   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:09.013032   70480 cri.go:89] found id: ""
	I0729 11:50:09.013056   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.013066   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:09.013075   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:09.013121   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:09.052365   70480 cri.go:89] found id: ""
	I0729 11:50:09.052390   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.052397   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:09.052402   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:09.052450   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:09.098671   70480 cri.go:89] found id: ""
	I0729 11:50:09.098719   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.098731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:09.098739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:09.098799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:09.148033   70480 cri.go:89] found id: ""
	I0729 11:50:09.148062   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.148083   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:09.148091   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:09.148151   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:09.195140   70480 cri.go:89] found id: ""
	I0729 11:50:09.195172   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.195179   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:09.195185   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:09.195244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:09.242315   70480 cri.go:89] found id: ""
	I0729 11:50:09.242346   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.242356   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:09.242364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:09.242428   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:09.278315   70480 cri.go:89] found id: ""
	I0729 11:50:09.278342   70480 logs.go:276] 0 containers: []
	W0729 11:50:09.278353   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:09.278364   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:09.278377   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:09.327622   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:09.327654   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:09.342383   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:09.342416   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:09.420797   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:09.420821   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:09.420832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:09.499308   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:09.499345   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:09.514809   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.515103   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.515311   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.181981   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.681128   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:11.480200   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:13.480991   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:12.042649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:12.057927   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:12.057996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:12.096129   70480 cri.go:89] found id: ""
	I0729 11:50:12.096159   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.096170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:12.096177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:12.096244   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:12.133849   70480 cri.go:89] found id: ""
	I0729 11:50:12.133880   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.133891   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:12.133898   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:12.133963   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:12.171706   70480 cri.go:89] found id: ""
	I0729 11:50:12.171730   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.171738   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:12.171744   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:12.171810   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:12.211248   70480 cri.go:89] found id: ""
	I0729 11:50:12.211285   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.211307   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:12.211315   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:12.211379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:12.247472   70480 cri.go:89] found id: ""
	I0729 11:50:12.247500   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.247510   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:12.247517   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:12.247578   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:12.283818   70480 cri.go:89] found id: ""
	I0729 11:50:12.283847   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.283859   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:12.283866   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:12.283937   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:12.324453   70480 cri.go:89] found id: ""
	I0729 11:50:12.324478   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.324485   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:12.324490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:12.324541   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:12.362504   70480 cri.go:89] found id: ""
	I0729 11:50:12.362531   70480 logs.go:276] 0 containers: []
	W0729 11:50:12.362538   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:12.362546   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:12.362558   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:12.439250   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:12.439278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:12.439295   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:12.521240   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:12.521271   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:12.561881   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:12.561918   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:12.615509   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:12.615548   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:15.130823   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:15.151388   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:15.151457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:15.209596   70480 cri.go:89] found id: ""
	I0729 11:50:15.209645   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.209658   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:15.209668   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:15.209736   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:15.515486   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:18.014350   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.681466   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.686021   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.979592   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:17.980955   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:15.248351   70480 cri.go:89] found id: ""
	I0729 11:50:15.248383   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.248394   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:15.248402   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:15.248459   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:15.287261   70480 cri.go:89] found id: ""
	I0729 11:50:15.287288   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.287296   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:15.287301   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:15.287356   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:15.325197   70480 cri.go:89] found id: ""
	I0729 11:50:15.325221   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.325229   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:15.325234   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:15.325292   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:15.361903   70480 cri.go:89] found id: ""
	I0729 11:50:15.361930   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.361939   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:15.361944   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:15.361994   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:15.399963   70480 cri.go:89] found id: ""
	I0729 11:50:15.399996   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.400007   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:15.400015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:15.400079   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:15.437366   70480 cri.go:89] found id: ""
	I0729 11:50:15.437400   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.437409   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:15.437414   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:15.437476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:15.473784   70480 cri.go:89] found id: ""
	I0729 11:50:15.473810   70480 logs.go:276] 0 containers: []
	W0729 11:50:15.473827   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:15.473837   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:15.473852   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:15.550294   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:15.550328   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:15.550343   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:15.636252   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:15.636297   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:15.681361   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:15.681397   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:15.735415   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:15.735451   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.250767   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:18.265247   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:18.265319   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:18.302793   70480 cri.go:89] found id: ""
	I0729 11:50:18.302819   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.302827   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:18.302833   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:18.302894   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:18.345507   70480 cri.go:89] found id: ""
	I0729 11:50:18.345541   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.345551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:18.345558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:18.345621   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:18.381646   70480 cri.go:89] found id: ""
	I0729 11:50:18.381675   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.381682   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:18.381688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:18.381750   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:18.417233   70480 cri.go:89] found id: ""
	I0729 11:50:18.417261   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.417268   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:18.417275   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:18.417340   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:18.453506   70480 cri.go:89] found id: ""
	I0729 11:50:18.453534   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.453541   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:18.453547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:18.453598   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:18.491886   70480 cri.go:89] found id: ""
	I0729 11:50:18.491910   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.491918   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:18.491923   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:18.491980   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:18.528413   70480 cri.go:89] found id: ""
	I0729 11:50:18.528444   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.528454   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:18.528462   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:18.528518   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:18.565621   70480 cri.go:89] found id: ""
	I0729 11:50:18.565653   70480 logs.go:276] 0 containers: []
	W0729 11:50:18.565663   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:18.565673   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:18.565690   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:18.616796   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:18.616832   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:18.631175   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:18.631202   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:18.712480   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:18.712506   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:18.712520   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:18.797246   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:18.797284   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:20.514492   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:23.016174   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.181252   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.682450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:20.480316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:22.980474   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:21.344260   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:21.358689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:21.358781   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:21.393248   70480 cri.go:89] found id: ""
	I0729 11:50:21.393276   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.393286   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:21.393293   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:21.393352   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:21.426960   70480 cri.go:89] found id: ""
	I0729 11:50:21.426989   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.426999   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:21.427007   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:21.427066   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:21.463521   70480 cri.go:89] found id: ""
	I0729 11:50:21.463545   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.463553   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:21.463559   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:21.463612   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:21.501915   70480 cri.go:89] found id: ""
	I0729 11:50:21.501950   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.501960   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:21.501966   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:21.502023   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:21.538208   70480 cri.go:89] found id: ""
	I0729 11:50:21.538247   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.538258   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:21.538265   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:21.538327   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:21.572572   70480 cri.go:89] found id: ""
	I0729 11:50:21.572594   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.572602   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:21.572607   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:21.572664   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:21.608002   70480 cri.go:89] found id: ""
	I0729 11:50:21.608037   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.608046   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:21.608053   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:21.608103   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:21.643039   70480 cri.go:89] found id: ""
	I0729 11:50:21.643069   70480 logs.go:276] 0 containers: []
	W0729 11:50:21.643080   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:21.643098   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:21.643115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:21.722921   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:21.722960   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:21.768597   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:21.768628   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:21.826974   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:21.827009   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:21.842214   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:21.842242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:21.913217   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:24.413629   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:24.428364   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:24.428458   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:24.469631   70480 cri.go:89] found id: ""
	I0729 11:50:24.469654   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.469661   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:24.469667   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:24.469712   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:24.508201   70480 cri.go:89] found id: ""
	I0729 11:50:24.508231   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.508242   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:24.508254   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:24.508317   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:24.543992   70480 cri.go:89] found id: ""
	I0729 11:50:24.544020   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.544028   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:24.544034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:24.544082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:24.579956   70480 cri.go:89] found id: ""
	I0729 11:50:24.579983   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.579990   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:24.579995   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:24.580051   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:24.616233   70480 cri.go:89] found id: ""
	I0729 11:50:24.616259   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.616267   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:24.616273   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:24.616339   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:24.655131   70480 cri.go:89] found id: ""
	I0729 11:50:24.655159   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.655167   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:24.655173   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:24.655223   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:24.701706   70480 cri.go:89] found id: ""
	I0729 11:50:24.701730   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.701738   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:24.701743   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:24.701799   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:24.739729   70480 cri.go:89] found id: ""
	I0729 11:50:24.739754   70480 logs.go:276] 0 containers: []
	W0729 11:50:24.739762   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:24.739773   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:24.739785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:24.817347   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:24.817390   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:24.858248   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:24.858274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:24.911486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:24.911527   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:24.927180   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:24.927209   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:25.007474   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:25.515125   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.515919   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:24.682503   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.180867   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:29.181299   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:25.478971   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.979128   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:27.507887   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:27.521936   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:27.522004   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:27.555832   70480 cri.go:89] found id: ""
	I0729 11:50:27.555865   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.555875   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:27.555882   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:27.555944   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:27.590482   70480 cri.go:89] found id: ""
	I0729 11:50:27.590509   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.590518   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:27.590526   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:27.590587   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:27.626998   70480 cri.go:89] found id: ""
	I0729 11:50:27.627028   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.627038   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:27.627045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:27.627105   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:27.661980   70480 cri.go:89] found id: ""
	I0729 11:50:27.662015   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.662027   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:27.662034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:27.662096   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:27.696646   70480 cri.go:89] found id: ""
	I0729 11:50:27.696675   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.696683   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:27.696689   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:27.696735   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:27.739393   70480 cri.go:89] found id: ""
	I0729 11:50:27.739421   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.739432   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:27.739439   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:27.739500   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:27.774985   70480 cri.go:89] found id: ""
	I0729 11:50:27.775013   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.775027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:27.775034   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:27.775097   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:27.811526   70480 cri.go:89] found id: ""
	I0729 11:50:27.811555   70480 logs.go:276] 0 containers: []
	W0729 11:50:27.811567   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:27.811578   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:27.811594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:27.866445   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:27.866482   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:27.881961   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:27.881991   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:27.949524   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:27.949543   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:27.949555   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:28.029386   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:28.029418   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:30.014858   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.515721   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:31.183830   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:33.681416   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.479786   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:32.484195   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:34.978772   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:30.571850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:30.586163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:30.586241   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:30.622065   70480 cri.go:89] found id: ""
	I0729 11:50:30.622096   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.622105   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:30.622113   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:30.622189   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:30.659346   70480 cri.go:89] found id: ""
	I0729 11:50:30.659386   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.659398   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:30.659405   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:30.659467   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:30.699376   70480 cri.go:89] found id: ""
	I0729 11:50:30.699403   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.699413   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:30.699421   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:30.699490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:30.734843   70480 cri.go:89] found id: ""
	I0729 11:50:30.734873   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.734892   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:30.734900   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:30.734974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:30.772983   70480 cri.go:89] found id: ""
	I0729 11:50:30.773010   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.773021   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:30.773028   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:30.773084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:30.814777   70480 cri.go:89] found id: ""
	I0729 11:50:30.814805   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.814815   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:30.814823   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:30.814891   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:30.851989   70480 cri.go:89] found id: ""
	I0729 11:50:30.852018   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.852027   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:30.852036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:30.852094   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:30.891692   70480 cri.go:89] found id: ""
	I0729 11:50:30.891716   70480 logs.go:276] 0 containers: []
	W0729 11:50:30.891732   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:30.891743   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:30.891758   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:30.943466   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:30.943498   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:30.957182   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:30.957208   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:31.029695   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:31.029717   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:31.029731   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:31.113329   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:31.113378   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:33.654275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:33.668509   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:33.668581   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:33.706103   70480 cri.go:89] found id: ""
	I0729 11:50:33.706133   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.706144   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:33.706151   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:33.706203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:33.743389   70480 cri.go:89] found id: ""
	I0729 11:50:33.743417   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.743424   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:33.743431   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:33.743482   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:33.781980   70480 cri.go:89] found id: ""
	I0729 11:50:33.782014   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.782025   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:33.782032   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:33.782092   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:33.818049   70480 cri.go:89] found id: ""
	I0729 11:50:33.818080   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.818090   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:33.818098   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:33.818164   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:33.854043   70480 cri.go:89] found id: ""
	I0729 11:50:33.854069   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.854077   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:33.854083   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:33.854144   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:33.891292   70480 cri.go:89] found id: ""
	I0729 11:50:33.891319   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.891329   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:33.891338   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:33.891400   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:33.927871   70480 cri.go:89] found id: ""
	I0729 11:50:33.927904   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.927915   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:33.927922   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:33.927979   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:33.964136   70480 cri.go:89] found id: ""
	I0729 11:50:33.964163   70480 logs.go:276] 0 containers: []
	W0729 11:50:33.964170   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:33.964181   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:33.964195   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:34.015262   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:34.015292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:34.029677   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:34.029711   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:34.105907   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:34.105932   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:34.105945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:34.186085   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:34.186120   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:35.014404   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:37.015435   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:35.681610   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:38.181485   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.979912   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:39.480001   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:36.740552   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:36.754472   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:36.754533   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:36.793126   70480 cri.go:89] found id: ""
	I0729 11:50:36.793162   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.793170   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:36.793178   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:36.793235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:36.831095   70480 cri.go:89] found id: ""
	I0729 11:50:36.831152   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.831166   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:36.831176   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:36.831235   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:36.869236   70480 cri.go:89] found id: ""
	I0729 11:50:36.869266   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.869277   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:36.869284   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:36.869343   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:36.907163   70480 cri.go:89] found id: ""
	I0729 11:50:36.907195   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.907203   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:36.907209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:36.907267   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:36.943074   70480 cri.go:89] found id: ""
	I0729 11:50:36.943101   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.943110   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:36.943115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:36.943177   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:36.980411   70480 cri.go:89] found id: ""
	I0729 11:50:36.980432   70480 logs.go:276] 0 containers: []
	W0729 11:50:36.980442   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:36.980449   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:36.980509   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:37.017994   70480 cri.go:89] found id: ""
	I0729 11:50:37.018015   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.018028   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:37.018035   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:37.018091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:37.052761   70480 cri.go:89] found id: ""
	I0729 11:50:37.052787   70480 logs.go:276] 0 containers: []
	W0729 11:50:37.052797   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:37.052806   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:37.052818   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:37.105925   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:37.105970   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:37.119829   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:37.119862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:37.198953   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:37.198992   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:37.199013   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:37.276947   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:37.276987   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:39.816196   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:39.830387   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:39.830460   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:39.864879   70480 cri.go:89] found id: ""
	I0729 11:50:39.864914   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.864921   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:39.864927   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:39.864974   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:39.902730   70480 cri.go:89] found id: ""
	I0729 11:50:39.902761   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.902772   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:39.902779   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:39.902832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:39.937630   70480 cri.go:89] found id: ""
	I0729 11:50:39.937656   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.937663   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:39.937669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:39.937718   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:39.972692   70480 cri.go:89] found id: ""
	I0729 11:50:39.972723   70480 logs.go:276] 0 containers: []
	W0729 11:50:39.972731   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:39.972736   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:39.972798   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:40.016152   70480 cri.go:89] found id: ""
	I0729 11:50:40.016179   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.016187   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:40.016192   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:40.016239   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:40.051206   70480 cri.go:89] found id: ""
	I0729 11:50:40.051233   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.051243   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:40.051249   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:40.051310   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:40.091007   70480 cri.go:89] found id: ""
	I0729 11:50:40.091039   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.091050   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:40.091057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:40.091122   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:40.130939   70480 cri.go:89] found id: ""
	I0729 11:50:40.130968   70480 logs.go:276] 0 containers: []
	W0729 11:50:40.130979   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:40.130992   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:40.131011   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:40.210551   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:40.210579   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:40.210594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:39.514683   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.515289   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.515935   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.681167   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:42.683536   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:41.978995   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:43.979276   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:40.292853   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:40.292889   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:40.333337   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:40.333365   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:40.383219   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:40.383254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:42.898275   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:42.913287   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:42.913365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:42.948662   70480 cri.go:89] found id: ""
	I0729 11:50:42.948696   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.948709   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:42.948716   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:42.948768   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:42.989511   70480 cri.go:89] found id: ""
	I0729 11:50:42.989541   70480 logs.go:276] 0 containers: []
	W0729 11:50:42.989551   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:42.989558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:42.989609   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:43.025987   70480 cri.go:89] found id: ""
	I0729 11:50:43.026013   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.026021   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:43.026026   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:43.026082   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:43.062210   70480 cri.go:89] found id: ""
	I0729 11:50:43.062243   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.062253   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:43.062271   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:43.062344   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:43.104967   70480 cri.go:89] found id: ""
	I0729 11:50:43.104990   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.104997   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:43.105003   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:43.105081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:43.146433   70480 cri.go:89] found id: ""
	I0729 11:50:43.146467   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.146479   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:43.146487   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:43.146551   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:43.199618   70480 cri.go:89] found id: ""
	I0729 11:50:43.199647   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.199658   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:43.199665   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:43.199721   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:43.238994   70480 cri.go:89] found id: ""
	I0729 11:50:43.239025   70480 logs.go:276] 0 containers: []
	W0729 11:50:43.239036   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:43.239053   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:43.239071   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:43.253185   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:43.253211   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:43.325381   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:43.325399   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:43.325410   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:43.408547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:43.408582   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:43.447251   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:43.447281   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:45.516120   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.015236   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:45.181461   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:47.682648   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.478782   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:48.479013   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:46.001731   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:46.017006   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:46.017084   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:46.056451   70480 cri.go:89] found id: ""
	I0729 11:50:46.056480   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.056492   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:46.056500   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:46.056562   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:46.094715   70480 cri.go:89] found id: ""
	I0729 11:50:46.094754   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.094762   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:46.094767   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:46.094817   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:46.131440   70480 cri.go:89] found id: ""
	I0729 11:50:46.131471   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.131483   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:46.131490   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:46.131548   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:46.171239   70480 cri.go:89] found id: ""
	I0729 11:50:46.171264   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.171271   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:46.171278   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:46.171331   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:46.208060   70480 cri.go:89] found id: ""
	I0729 11:50:46.208094   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.208102   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:46.208108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:46.208162   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:46.244765   70480 cri.go:89] found id: ""
	I0729 11:50:46.244797   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.244806   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:46.244813   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:46.244874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:46.281932   70480 cri.go:89] found id: ""
	I0729 11:50:46.281965   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.281977   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:46.281986   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:46.282058   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:46.319589   70480 cri.go:89] found id: ""
	I0729 11:50:46.319618   70480 logs.go:276] 0 containers: []
	W0729 11:50:46.319629   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:46.319640   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:46.319655   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:46.369821   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:46.369859   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:46.384828   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:46.384862   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:46.460755   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:46.460780   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:46.460793   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:46.543424   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:46.543459   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.089661   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:49.103781   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:49.103878   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:49.140206   70480 cri.go:89] found id: ""
	I0729 11:50:49.140234   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.140242   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:49.140248   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:49.140306   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:49.181047   70480 cri.go:89] found id: ""
	I0729 11:50:49.181077   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.181087   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:49.181094   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:49.181160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:49.217106   70480 cri.go:89] found id: ""
	I0729 11:50:49.217135   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.217145   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:49.217152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:49.217213   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:49.256920   70480 cri.go:89] found id: ""
	I0729 11:50:49.256955   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.256966   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:49.256973   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:49.257040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:49.294586   70480 cri.go:89] found id: ""
	I0729 11:50:49.294610   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.294618   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:49.294623   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:49.294687   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:49.332441   70480 cri.go:89] found id: ""
	I0729 11:50:49.332467   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.332475   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:49.332480   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:49.332538   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:49.370255   70480 cri.go:89] found id: ""
	I0729 11:50:49.370281   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.370288   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:49.370293   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:49.370348   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:49.407106   70480 cri.go:89] found id: ""
	I0729 11:50:49.407142   70480 logs.go:276] 0 containers: []
	W0729 11:50:49.407150   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:49.407158   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:49.407170   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:49.487262   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:49.487294   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:49.530381   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:49.530407   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:49.585601   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:49.585640   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:49.608868   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:49.608909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:49.717369   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:50.513962   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.514789   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.181505   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.681593   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:50.483654   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.978973   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:54.979504   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:52.218194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:52.232762   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:52.232830   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:52.269399   70480 cri.go:89] found id: ""
	I0729 11:50:52.269427   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.269435   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:52.269441   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:52.269488   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:52.304375   70480 cri.go:89] found id: ""
	I0729 11:50:52.304405   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.304415   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:52.304421   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:52.304471   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:52.339374   70480 cri.go:89] found id: ""
	I0729 11:50:52.339406   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.339423   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:52.339431   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:52.339490   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:52.375675   70480 cri.go:89] found id: ""
	I0729 11:50:52.375704   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.375715   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:52.375724   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:52.375785   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:52.411587   70480 cri.go:89] found id: ""
	I0729 11:50:52.411612   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.411620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:52.411625   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:52.411677   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:52.445267   70480 cri.go:89] found id: ""
	I0729 11:50:52.445291   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.445301   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:52.445308   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:52.445367   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:52.491320   70480 cri.go:89] found id: ""
	I0729 11:50:52.491352   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.491361   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:52.491376   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:52.491432   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:52.528158   70480 cri.go:89] found id: ""
	I0729 11:50:52.528195   70480 logs.go:276] 0 containers: []
	W0729 11:50:52.528205   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:52.528214   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:52.528229   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:52.584122   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:52.584156   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:52.598572   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:52.598611   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:52.675433   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:52.675451   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:52.675473   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:52.759393   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:52.759433   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:55.014201   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.015293   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.181456   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:57.680557   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:56.980460   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:58.982179   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:55.300231   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:55.316902   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:55.316972   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:55.362311   70480 cri.go:89] found id: ""
	I0729 11:50:55.362350   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.362360   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:55.362368   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:55.362434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:55.420481   70480 cri.go:89] found id: ""
	I0729 11:50:55.420506   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.420519   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:55.420524   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:55.420582   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:55.472515   70480 cri.go:89] found id: ""
	I0729 11:50:55.472546   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.472556   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:55.472565   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:55.472625   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:55.513203   70480 cri.go:89] found id: ""
	I0729 11:50:55.513224   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.513232   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:55.513237   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:55.513290   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:55.548410   70480 cri.go:89] found id: ""
	I0729 11:50:55.548440   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.548450   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:55.548457   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:55.548517   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:55.584532   70480 cri.go:89] found id: ""
	I0729 11:50:55.584561   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.584571   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:55.584577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:55.584640   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:55.627593   70480 cri.go:89] found id: ""
	I0729 11:50:55.627623   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.627652   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:55.627660   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:55.627723   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:55.667981   70480 cri.go:89] found id: ""
	I0729 11:50:55.668005   70480 logs.go:276] 0 containers: []
	W0729 11:50:55.668014   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:55.668021   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:55.668050   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:55.721569   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:55.721605   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:55.735570   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:55.735598   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:55.810549   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:55.810578   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:55.810590   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:55.892547   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:55.892594   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:58.435946   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:50:58.449628   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:50:58.449693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:50:58.487449   70480 cri.go:89] found id: ""
	I0729 11:50:58.487481   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.487499   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:50:58.487507   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:50:58.487574   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:50:58.524030   70480 cri.go:89] found id: ""
	I0729 11:50:58.524051   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.524058   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:50:58.524063   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:50:58.524118   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:50:58.560348   70480 cri.go:89] found id: ""
	I0729 11:50:58.560374   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.560381   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:50:58.560386   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:50:58.560434   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:50:58.598947   70480 cri.go:89] found id: ""
	I0729 11:50:58.598974   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.598984   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:50:58.598992   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:50:58.599050   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:50:58.634763   70480 cri.go:89] found id: ""
	I0729 11:50:58.634789   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.634799   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:50:58.634807   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:50:58.634867   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:50:58.671609   70480 cri.go:89] found id: ""
	I0729 11:50:58.671639   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.671649   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:50:58.671656   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:50:58.671715   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:50:58.712629   70480 cri.go:89] found id: ""
	I0729 11:50:58.712654   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.712661   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:50:58.712669   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:50:58.712719   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:50:58.749749   70480 cri.go:89] found id: ""
	I0729 11:50:58.749779   70480 logs.go:276] 0 containers: []
	W0729 11:50:58.749788   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:50:58.749799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:50:58.749813   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:50:58.807124   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:50:58.807159   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:50:58.821486   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:50:58.821513   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:50:58.889226   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:50:58.889248   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:50:58.889263   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:50:58.968593   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:50:58.968633   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:50:59.515675   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.015006   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:50:59.681443   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:02.181409   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:04.183067   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.482470   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:03.482794   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:01.511112   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:01.525124   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:01.525221   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:01.559415   70480 cri.go:89] found id: ""
	I0729 11:51:01.559450   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.559462   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:01.559469   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:01.559530   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:01.593614   70480 cri.go:89] found id: ""
	I0729 11:51:01.593644   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.593655   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:01.593661   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:01.593722   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:01.632365   70480 cri.go:89] found id: ""
	I0729 11:51:01.632398   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.632409   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:01.632416   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:01.632476   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:01.670518   70480 cri.go:89] found id: ""
	I0729 11:51:01.670543   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.670550   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:01.670557   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:01.670618   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:01.709732   70480 cri.go:89] found id: ""
	I0729 11:51:01.709755   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.709762   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:01.709768   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:01.709813   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:01.751710   70480 cri.go:89] found id: ""
	I0729 11:51:01.751739   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.751749   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:01.751756   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:01.751818   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:01.795819   70480 cri.go:89] found id: ""
	I0729 11:51:01.795848   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.795859   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:01.795867   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:01.795931   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:01.838654   70480 cri.go:89] found id: ""
	I0729 11:51:01.838684   70480 logs.go:276] 0 containers: []
	W0729 11:51:01.838691   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:01.838719   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:01.838734   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:01.894328   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:01.894370   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:01.908870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:01.908894   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:01.982740   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:01.982770   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:01.982785   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:02.068332   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:02.068376   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.614758   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:04.629646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:04.629708   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:04.666502   70480 cri.go:89] found id: ""
	I0729 11:51:04.666526   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.666534   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:04.666540   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:04.666590   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:04.705373   70480 cri.go:89] found id: ""
	I0729 11:51:04.705398   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.705407   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:04.705414   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:04.705468   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:04.745076   70480 cri.go:89] found id: ""
	I0729 11:51:04.745110   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.745122   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:04.745130   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:04.745195   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:04.786923   70480 cri.go:89] found id: ""
	I0729 11:51:04.786953   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.786963   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:04.786971   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:04.787031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:04.824483   70480 cri.go:89] found id: ""
	I0729 11:51:04.824514   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.824522   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:04.824530   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:04.824591   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:04.861347   70480 cri.go:89] found id: ""
	I0729 11:51:04.861379   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.861390   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:04.861399   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:04.861456   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:04.902102   70480 cri.go:89] found id: ""
	I0729 11:51:04.902136   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.902143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:04.902149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:04.902206   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:04.942533   70480 cri.go:89] found id: ""
	I0729 11:51:04.942560   70480 logs.go:276] 0 containers: []
	W0729 11:51:04.942568   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:04.942576   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:04.942589   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:04.997272   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:04.997309   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:05.012276   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:05.012307   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:05.088408   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:05.088434   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:05.088462   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:05.173414   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:05.173452   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:04.514092   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.016150   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:06.680804   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:08.681656   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:05.978846   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.979974   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:07.718649   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:07.732209   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:07.732282   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:07.767128   70480 cri.go:89] found id: ""
	I0729 11:51:07.767157   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.767164   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:07.767170   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:07.767233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:07.803210   70480 cri.go:89] found id: ""
	I0729 11:51:07.803243   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.803253   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:07.803260   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:07.803323   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:07.839694   70480 cri.go:89] found id: ""
	I0729 11:51:07.839718   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.839726   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:07.839732   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:07.839779   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:07.876660   70480 cri.go:89] found id: ""
	I0729 11:51:07.876687   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.876695   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:07.876701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:07.876758   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:07.923089   70480 cri.go:89] found id: ""
	I0729 11:51:07.923119   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.923128   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:07.923139   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:07.923191   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:07.961178   70480 cri.go:89] found id: ""
	I0729 11:51:07.961205   70480 logs.go:276] 0 containers: []
	W0729 11:51:07.961214   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:07.961223   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:07.961283   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:08.001007   70480 cri.go:89] found id: ""
	I0729 11:51:08.001031   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.001038   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:08.001047   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:08.001115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:08.036930   70480 cri.go:89] found id: ""
	I0729 11:51:08.036956   70480 logs.go:276] 0 containers: []
	W0729 11:51:08.036964   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:08.036972   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:08.036982   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:08.091405   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:08.091440   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:08.106456   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:08.106483   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:08.181814   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:08.181835   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:08.181846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:08.267663   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:08.267701   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:09.514482   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.514970   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:11.182959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:13.680925   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.481614   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:12.482016   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:14.980848   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:10.814602   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:10.828290   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:10.828351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:10.866374   70480 cri.go:89] found id: ""
	I0729 11:51:10.866398   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.866406   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:10.866412   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:10.866457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:10.908253   70480 cri.go:89] found id: ""
	I0729 11:51:10.908286   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.908295   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:10.908301   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:10.908370   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:10.952686   70480 cri.go:89] found id: ""
	I0729 11:51:10.952709   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.952717   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:10.952723   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:10.952771   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:10.991621   70480 cri.go:89] found id: ""
	I0729 11:51:10.991650   70480 logs.go:276] 0 containers: []
	W0729 11:51:10.991661   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:10.991668   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:10.991728   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:11.028420   70480 cri.go:89] found id: ""
	I0729 11:51:11.028451   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.028462   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:11.028469   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:11.028520   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:11.065213   70480 cri.go:89] found id: ""
	I0729 11:51:11.065248   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.065259   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:11.065266   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:11.065328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:11.104008   70480 cri.go:89] found id: ""
	I0729 11:51:11.104051   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.104064   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:11.104073   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:11.104134   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:11.140872   70480 cri.go:89] found id: ""
	I0729 11:51:11.140913   70480 logs.go:276] 0 containers: []
	W0729 11:51:11.140925   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:11.140936   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:11.140958   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:11.222498   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:11.222535   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:11.265869   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:11.265909   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:11.319889   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:11.319926   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:11.334069   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:11.334100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:11.412461   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:13.913194   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:13.927057   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:13.927141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:13.963267   70480 cri.go:89] found id: ""
	I0729 11:51:13.963302   70480 logs.go:276] 0 containers: []
	W0729 11:51:13.963312   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:13.963321   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:13.963386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:14.002591   70480 cri.go:89] found id: ""
	I0729 11:51:14.002621   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.002633   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:14.002640   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:14.002737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:14.041377   70480 cri.go:89] found id: ""
	I0729 11:51:14.041410   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.041422   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:14.041437   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:14.041502   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:14.081794   70480 cri.go:89] found id: ""
	I0729 11:51:14.081821   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.081829   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:14.081835   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:14.081888   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:14.121223   70480 cri.go:89] found id: ""
	I0729 11:51:14.121251   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.121261   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:14.121269   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:14.121333   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:14.156762   70480 cri.go:89] found id: ""
	I0729 11:51:14.156798   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.156808   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:14.156817   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:14.156881   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:14.195068   70480 cri.go:89] found id: ""
	I0729 11:51:14.195098   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.195108   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:14.195115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:14.195185   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:14.233361   70480 cri.go:89] found id: ""
	I0729 11:51:14.233392   70480 logs.go:276] 0 containers: []
	W0729 11:51:14.233402   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:14.233413   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:14.233428   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:14.289276   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:14.289318   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:14.304505   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:14.304536   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:14.376648   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:14.376673   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:14.376685   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:14.453538   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:14.453574   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:14.016205   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:16.514374   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.514902   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:15.681382   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:18.181597   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.479865   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:19.480304   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:17.004150   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:17.018324   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:17.018401   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:17.056923   70480 cri.go:89] found id: ""
	I0729 11:51:17.056959   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.056970   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:17.056978   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:17.057042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:17.093329   70480 cri.go:89] found id: ""
	I0729 11:51:17.093362   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.093374   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:17.093381   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:17.093443   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:17.130340   70480 cri.go:89] found id: ""
	I0729 11:51:17.130372   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.130382   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:17.130391   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:17.130457   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:17.164877   70480 cri.go:89] found id: ""
	I0729 11:51:17.164902   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.164910   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:17.164915   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:17.164962   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:17.201509   70480 cri.go:89] found id: ""
	I0729 11:51:17.201538   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.201549   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:17.201555   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:17.201629   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:17.242097   70480 cri.go:89] found id: ""
	I0729 11:51:17.242121   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.242130   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:17.242136   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:17.242182   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:17.277239   70480 cri.go:89] found id: ""
	I0729 11:51:17.277262   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.277270   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:17.277279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:17.277328   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:17.310991   70480 cri.go:89] found id: ""
	I0729 11:51:17.311022   70480 logs.go:276] 0 containers: []
	W0729 11:51:17.311034   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:17.311046   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:17.311061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:17.386672   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:17.386718   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:17.424880   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:17.424905   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:17.477226   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:17.477255   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:17.492327   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:17.492354   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:17.570971   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.071148   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:20.085734   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:20.085821   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:20.120455   70480 cri.go:89] found id: ""
	I0729 11:51:20.120500   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.120512   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:20.120530   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:20.120589   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:20.165159   70480 cri.go:89] found id: ""
	I0729 11:51:20.165186   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.165193   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:20.165199   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:20.165245   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:20.202147   70480 cri.go:89] found id: ""
	I0729 11:51:20.202168   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.202175   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:20.202182   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:20.202237   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:20.515560   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.014288   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.681542   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.181158   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:21.978106   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:23.979809   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:20.237783   70480 cri.go:89] found id: ""
	I0729 11:51:20.237811   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.237822   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:20.237829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:20.237890   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:20.280812   70480 cri.go:89] found id: ""
	I0729 11:51:20.280839   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.280852   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:20.280858   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:20.280922   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:20.317904   70480 cri.go:89] found id: ""
	I0729 11:51:20.317925   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.317932   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:20.317938   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:20.317986   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:20.356106   70480 cri.go:89] found id: ""
	I0729 11:51:20.356136   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.356143   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:20.356149   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:20.356197   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:20.393483   70480 cri.go:89] found id: ""
	I0729 11:51:20.393514   70480 logs.go:276] 0 containers: []
	W0729 11:51:20.393526   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:20.393537   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:20.393552   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:20.446650   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:20.446716   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:20.460502   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:20.460531   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:20.535717   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:20.535738   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:20.535751   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:20.619068   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:20.619119   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.159775   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:23.174688   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:23.174776   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:23.210990   70480 cri.go:89] found id: ""
	I0729 11:51:23.211017   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.211025   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:23.211031   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:23.211083   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:23.251468   70480 cri.go:89] found id: ""
	I0729 11:51:23.251494   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.251505   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:23.251512   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:23.251567   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:23.288902   70480 cri.go:89] found id: ""
	I0729 11:51:23.288950   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.288961   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:23.288969   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:23.289028   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:23.324544   70480 cri.go:89] found id: ""
	I0729 11:51:23.324583   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.324593   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:23.324604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:23.324681   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:23.363138   70480 cri.go:89] found id: ""
	I0729 11:51:23.363170   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.363180   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:23.363188   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:23.363246   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:23.400107   70480 cri.go:89] found id: ""
	I0729 11:51:23.400136   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.400146   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:23.400163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:23.400224   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:23.437129   70480 cri.go:89] found id: ""
	I0729 11:51:23.437169   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.437180   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:23.437189   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:23.437251   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:23.470782   70480 cri.go:89] found id: ""
	I0729 11:51:23.470811   70480 logs.go:276] 0 containers: []
	W0729 11:51:23.470821   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:23.470831   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:23.470846   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:23.557806   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:23.557843   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:23.601312   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:23.601342   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:23.658042   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:23.658084   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:23.682844   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:23.682878   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:23.773049   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:25.015099   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.518243   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:25.680468   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:27.680741   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.479529   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:28.978442   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:26.273263   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:26.286759   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:26.286822   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:26.325303   70480 cri.go:89] found id: ""
	I0729 11:51:26.325330   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.325340   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:26.325347   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:26.325402   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:26.366994   70480 cri.go:89] found id: ""
	I0729 11:51:26.367019   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.367033   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:26.367040   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:26.367100   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:26.407748   70480 cri.go:89] found id: ""
	I0729 11:51:26.407779   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.407789   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:26.407796   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:26.407856   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:26.443172   70480 cri.go:89] found id: ""
	I0729 11:51:26.443197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.443206   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:26.443214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:26.443275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:26.479906   70480 cri.go:89] found id: ""
	I0729 11:51:26.479928   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.479937   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:26.479945   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:26.480011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:26.518837   70480 cri.go:89] found id: ""
	I0729 11:51:26.518867   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.518877   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:26.518884   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:26.518939   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:26.557168   70480 cri.go:89] found id: ""
	I0729 11:51:26.557197   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.557207   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:26.557214   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:26.557271   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:26.591670   70480 cri.go:89] found id: ""
	I0729 11:51:26.591699   70480 logs.go:276] 0 containers: []
	W0729 11:51:26.591707   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:26.591715   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:26.591727   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:26.606611   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:26.606641   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:26.675726   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:26.675752   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:26.675768   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:26.755738   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:26.755776   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:26.799482   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:26.799518   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:29.353415   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:29.367062   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:29.367141   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:29.404628   70480 cri.go:89] found id: ""
	I0729 11:51:29.404669   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.404677   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:29.404683   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:29.404731   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:29.452828   70480 cri.go:89] found id: ""
	I0729 11:51:29.452858   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.452868   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:29.452877   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:29.452936   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:29.488252   70480 cri.go:89] found id: ""
	I0729 11:51:29.488280   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.488288   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:29.488296   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:29.488357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:29.524837   70480 cri.go:89] found id: ""
	I0729 11:51:29.524863   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.524874   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:29.524890   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:29.524938   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:29.564545   70480 cri.go:89] found id: ""
	I0729 11:51:29.564587   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.564598   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:29.564615   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:29.564686   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:29.604254   70480 cri.go:89] found id: ""
	I0729 11:51:29.604282   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.604292   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:29.604299   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:29.604365   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:29.640107   70480 cri.go:89] found id: ""
	I0729 11:51:29.640137   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.640147   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:29.640152   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:29.640207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:29.677070   70480 cri.go:89] found id: ""
	I0729 11:51:29.677099   70480 logs.go:276] 0 containers: []
	W0729 11:51:29.677109   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:29.677119   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:29.677143   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:29.692434   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:29.692466   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:29.769317   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:29.769344   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:29.769355   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:29.850468   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:29.850505   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:29.890975   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:29.891010   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:30.014896   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.014991   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:29.682442   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.181766   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:34.182032   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:30.979636   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:33.480377   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:32.445320   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:32.459458   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:32.459534   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:32.500629   70480 cri.go:89] found id: ""
	I0729 11:51:32.500652   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.500659   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:32.500664   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:32.500711   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:32.538737   70480 cri.go:89] found id: ""
	I0729 11:51:32.538763   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.538771   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:32.538777   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:32.538837   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:32.578093   70480 cri.go:89] found id: ""
	I0729 11:51:32.578124   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.578134   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:32.578141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:32.578200   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:32.615501   70480 cri.go:89] found id: ""
	I0729 11:51:32.615527   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.615539   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:32.615547   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:32.615597   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:32.653232   70480 cri.go:89] found id: ""
	I0729 11:51:32.653262   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.653272   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:32.653279   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:32.653338   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:32.691385   70480 cri.go:89] found id: ""
	I0729 11:51:32.691408   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.691418   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:32.691440   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:32.691486   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:32.727625   70480 cri.go:89] found id: ""
	I0729 11:51:32.727657   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.727667   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:32.727674   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:32.727737   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:32.762200   70480 cri.go:89] found id: ""
	I0729 11:51:32.762225   70480 logs.go:276] 0 containers: []
	W0729 11:51:32.762232   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:32.762240   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:32.762251   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:32.815020   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:32.815061   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:32.830072   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:32.830100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:32.902248   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:32.902278   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:32.902292   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:32.984571   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:32.984604   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:34.513960   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.514684   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.515512   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:36.680403   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.681176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.979834   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:38.482035   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:35.528850   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:35.543568   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:35.543633   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:35.583537   70480 cri.go:89] found id: ""
	I0729 11:51:35.583564   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.583571   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:35.583578   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:35.583632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:35.621997   70480 cri.go:89] found id: ""
	I0729 11:51:35.622021   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.622028   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:35.622034   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:35.622090   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:35.659062   70480 cri.go:89] found id: ""
	I0729 11:51:35.659091   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.659102   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:35.659109   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:35.659169   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:35.696644   70480 cri.go:89] found id: ""
	I0729 11:51:35.696679   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.696689   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:35.696701   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:35.696753   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:35.734321   70480 cri.go:89] found id: ""
	I0729 11:51:35.734348   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.734358   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:35.734366   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:35.734426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:35.771540   70480 cri.go:89] found id: ""
	I0729 11:51:35.771574   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.771586   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:35.771604   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:35.771684   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:35.811293   70480 cri.go:89] found id: ""
	I0729 11:51:35.811318   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.811326   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:35.811332   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:35.811386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:35.857175   70480 cri.go:89] found id: ""
	I0729 11:51:35.857206   70480 logs.go:276] 0 containers: []
	W0729 11:51:35.857217   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:35.857228   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:35.857242   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:35.946191   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:35.946210   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:35.946225   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:36.033466   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:36.033501   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:36.072593   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:36.072622   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:36.129193   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:36.129228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:38.645464   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:38.659318   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:38.659378   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:38.697259   70480 cri.go:89] found id: ""
	I0729 11:51:38.697289   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.697298   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:38.697304   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:38.697351   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:38.736722   70480 cri.go:89] found id: ""
	I0729 11:51:38.736751   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.736763   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:38.736770   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:38.736828   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:38.773508   70480 cri.go:89] found id: ""
	I0729 11:51:38.773532   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.773539   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:38.773545   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:38.773607   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:38.813147   70480 cri.go:89] found id: ""
	I0729 11:51:38.813177   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.813186   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:38.813193   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:38.813249   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:38.850588   70480 cri.go:89] found id: ""
	I0729 11:51:38.850621   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.850631   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:38.850639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:38.850694   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:38.888260   70480 cri.go:89] found id: ""
	I0729 11:51:38.888293   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.888304   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:38.888313   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:38.888380   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:38.924326   70480 cri.go:89] found id: ""
	I0729 11:51:38.924352   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.924360   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:38.924365   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:38.924426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:38.963356   70480 cri.go:89] found id: ""
	I0729 11:51:38.963386   70480 logs.go:276] 0 containers: []
	W0729 11:51:38.963397   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:38.963408   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:38.963425   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:39.048438   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:39.048472   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:39.087799   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:39.087828   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:39.141908   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:39.141945   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:39.156242   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:39.156282   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:39.231689   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:41.014799   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.015914   69907 pod_ready.go:102] pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.180241   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.180737   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:40.980126   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:43.480593   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:41.732860   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:41.752371   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:41.752451   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:41.804525   70480 cri.go:89] found id: ""
	I0729 11:51:41.804553   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.804565   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:41.804575   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:41.804632   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:41.859983   70480 cri.go:89] found id: ""
	I0729 11:51:41.860010   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.860018   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:41.860024   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:41.860081   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:41.909592   70480 cri.go:89] found id: ""
	I0729 11:51:41.909622   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.909632   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:41.909639   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:41.909700   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:41.949892   70480 cri.go:89] found id: ""
	I0729 11:51:41.949919   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.949928   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:41.949933   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:41.950011   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:41.985334   70480 cri.go:89] found id: ""
	I0729 11:51:41.985360   70480 logs.go:276] 0 containers: []
	W0729 11:51:41.985368   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:41.985374   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:41.985426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:42.028747   70480 cri.go:89] found id: ""
	I0729 11:51:42.028806   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.028818   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:42.028829   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:42.028899   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:42.063927   70480 cri.go:89] found id: ""
	I0729 11:51:42.063955   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.063965   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:42.063972   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:42.064031   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:42.099539   70480 cri.go:89] found id: ""
	I0729 11:51:42.099566   70480 logs.go:276] 0 containers: []
	W0729 11:51:42.099577   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:42.099587   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:42.099602   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:42.112852   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:42.112879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:42.185104   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:42.185130   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:42.185142   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:42.265744   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:42.265778   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:42.309451   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:42.309478   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.862271   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:44.876763   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:44.876832   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:44.921157   70480 cri.go:89] found id: ""
	I0729 11:51:44.921188   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.921198   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:44.921206   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:44.921266   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:44.960222   70480 cri.go:89] found id: ""
	I0729 11:51:44.960255   70480 logs.go:276] 0 containers: []
	W0729 11:51:44.960265   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:44.960272   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:44.960334   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:44.999994   70480 cri.go:89] found id: ""
	I0729 11:51:45.000025   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.000036   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:45.000045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:45.000108   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:45.037110   70480 cri.go:89] found id: ""
	I0729 11:51:45.037145   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.037156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:45.037163   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:45.037215   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:45.074406   70480 cri.go:89] found id: ""
	I0729 11:51:45.074430   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.074440   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:45.074447   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:45.074508   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:45.113069   70480 cri.go:89] found id: ""
	I0729 11:51:45.113097   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.113107   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:45.113115   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:45.113178   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:45.158016   70480 cri.go:89] found id: ""
	I0729 11:51:45.158045   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.158055   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:45.158063   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:45.158115   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:45.199257   70480 cri.go:89] found id: ""
	I0729 11:51:45.199286   70480 logs.go:276] 0 containers: []
	W0729 11:51:45.199294   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:45.199303   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:45.199314   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:44.509117   69907 pod_ready.go:81] duration metric: took 4m0.000903528s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" ...
	E0729 11:51:44.509148   69907 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-vqgtm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:51:44.509164   69907 pod_ready.go:38] duration metric: took 4m6.540840543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:51:44.509191   69907 kubeadm.go:597] duration metric: took 4m16.180899614s to restartPrimaryControlPlane
	W0729 11:51:44.509250   69907 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:51:44.509278   69907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:51:45.181697   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.682106   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.979275   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:47.979316   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:45.254060   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:45.254100   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:45.269592   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:45.269630   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:45.357469   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:45.357493   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:45.357508   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:45.437760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:45.437806   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:47.980407   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:47.998789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:47.998874   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:48.039278   70480 cri.go:89] found id: ""
	I0729 11:51:48.039311   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.039321   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:48.039328   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:48.039391   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:48.080277   70480 cri.go:89] found id: ""
	I0729 11:51:48.080312   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.080324   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:48.080332   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:48.080395   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:48.119009   70480 cri.go:89] found id: ""
	I0729 11:51:48.119032   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.119039   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:48.119045   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:48.119091   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:48.162062   70480 cri.go:89] found id: ""
	I0729 11:51:48.162091   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.162101   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:48.162108   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:48.162175   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:48.208089   70480 cri.go:89] found id: ""
	I0729 11:51:48.208120   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.208141   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:48.208148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:48.208214   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:48.254174   70480 cri.go:89] found id: ""
	I0729 11:51:48.254206   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.254217   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:48.254225   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:48.254288   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:48.294959   70480 cri.go:89] found id: ""
	I0729 11:51:48.294988   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.294998   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:48.295005   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:48.295067   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:48.337176   70480 cri.go:89] found id: ""
	I0729 11:51:48.337209   70480 logs.go:276] 0 containers: []
	W0729 11:51:48.337221   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:48.337231   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:48.337249   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:48.392826   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:48.392879   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:48.409017   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:48.409043   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:48.489964   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:48.489995   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:48.490012   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:48.571448   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:48.571496   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:50.180914   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.181136   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:50.479880   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:52.977753   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:54.978456   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:51.124524   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:51.137808   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:51.137887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:51.178622   70480 cri.go:89] found id: ""
	I0729 11:51:51.178647   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.178656   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:51.178663   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:51.178738   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:51.214985   70480 cri.go:89] found id: ""
	I0729 11:51:51.215008   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.215015   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:51.215021   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:51.215071   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:51.250529   70480 cri.go:89] found id: ""
	I0729 11:51:51.250575   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.250586   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:51.250594   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:51.250648   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:51.284745   70480 cri.go:89] found id: ""
	I0729 11:51:51.284774   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.284781   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:51.284787   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:51.284844   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:51.319448   70480 cri.go:89] found id: ""
	I0729 11:51:51.319476   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.319486   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:51.319494   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:51.319559   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:51.358828   70480 cri.go:89] found id: ""
	I0729 11:51:51.358861   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.358868   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:51.358875   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:51.358934   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:51.398326   70480 cri.go:89] found id: ""
	I0729 11:51:51.398356   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.398363   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:51.398369   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:51.398424   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:51.450495   70480 cri.go:89] found id: ""
	I0729 11:51:51.450523   70480 logs.go:276] 0 containers: []
	W0729 11:51:51.450530   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:51.450539   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:51.450549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:51.495351   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:51.495381   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:51.545937   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:51.545972   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:51.560738   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:51.560769   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:51.640187   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:51.640209   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:51.640223   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.230801   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:54.244231   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:54.244307   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:54.279319   70480 cri.go:89] found id: ""
	I0729 11:51:54.279349   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.279359   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:54.279366   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:54.279426   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:54.316648   70480 cri.go:89] found id: ""
	I0729 11:51:54.316675   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.316685   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:54.316691   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:54.316751   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:54.353605   70480 cri.go:89] found id: ""
	I0729 11:51:54.353631   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.353641   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:54.353646   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:54.353705   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:54.389695   70480 cri.go:89] found id: ""
	I0729 11:51:54.389724   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.389734   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:54.389739   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:54.389789   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:54.425572   70480 cri.go:89] found id: ""
	I0729 11:51:54.425610   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.425620   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:54.425627   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:54.425693   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:54.466041   70480 cri.go:89] found id: ""
	I0729 11:51:54.466084   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.466101   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:54.466111   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:54.466160   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:54.504916   70480 cri.go:89] found id: ""
	I0729 11:51:54.504943   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.504950   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:54.504956   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:54.505003   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:54.542093   70480 cri.go:89] found id: ""
	I0729 11:51:54.542133   70480 logs.go:276] 0 containers: []
	W0729 11:51:54.542141   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:54.542149   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:54.542161   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:54.555521   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:54.555549   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:54.639080   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:54.639104   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:54.639115   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:54.721817   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:54.721858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:54.760794   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:54.760831   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:51:54.681184   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.179812   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.180919   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:56.978928   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:59.479018   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:51:57.314549   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:51:57.328865   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:51:57.328941   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:51:57.364546   70480 cri.go:89] found id: ""
	I0729 11:51:57.364577   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.364587   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:51:57.364594   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:51:57.364665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:51:57.401965   70480 cri.go:89] found id: ""
	I0729 11:51:57.401994   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.402005   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:51:57.402013   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:51:57.402072   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:51:57.436918   70480 cri.go:89] found id: ""
	I0729 11:51:57.436942   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.436975   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:51:57.436983   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:51:57.437042   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:51:57.471476   70480 cri.go:89] found id: ""
	I0729 11:51:57.471503   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.471511   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:51:57.471519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:51:57.471576   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:51:57.509934   70480 cri.go:89] found id: ""
	I0729 11:51:57.509962   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.509972   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:51:57.509980   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:51:57.510038   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:51:57.552095   70480 cri.go:89] found id: ""
	I0729 11:51:57.552123   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.552133   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:51:57.552141   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:51:57.552204   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:51:57.586486   70480 cri.go:89] found id: ""
	I0729 11:51:57.586507   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.586514   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:51:57.586519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:51:57.586580   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:51:57.622708   70480 cri.go:89] found id: ""
	I0729 11:51:57.622737   70480 logs.go:276] 0 containers: []
	W0729 11:51:57.622746   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:51:57.622757   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:51:57.622771   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:51:57.637102   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:51:57.637133   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:51:57.710960   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:51:57.710981   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:51:57.710994   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:51:57.803522   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:51:57.803559   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:51:57.845804   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:51:57.845838   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
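Note: every "describe nodes" attempt in these cycles fails with "connection refused" on localhost:8443, meaning nothing is listening on the apiserver port yet. A quick manual check of that condition (a sketch; assumes ss and curl are available in the guest):

    # Is anything serving the apiserver port at all?
    sudo ss -ltnp 'sport = :8443'
    # Does an endpoint answer there? (-k skips cert verification; a refused
    # connection here matches the kubectl error in the log above)
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"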
	I0729 11:52:01.680142   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.682844   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:01.978739   70231 pod_ready.go:102] pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:03.973441   70231 pod_ready.go:81] duration metric: took 4m0.000922355s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:03.973469   70231 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v94xq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:03.973488   70231 pod_ready.go:38] duration metric: took 4m6.983171556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:03.973523   70231 kubeadm.go:597] duration metric: took 4m14.830269847s to restartPrimaryControlPlane
	W0729 11:52:03.973614   70231 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:03.973646   70231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
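Note: the pod_ready loop that times out above is minikube waiting 4m0s for the metrics-server pod to report Ready before giving up and resetting the cluster. The same wait can be expressed directly with kubectl (a sketch; PROFILE is a placeholder for whichever profile this run belongs to, and the k8s-app=metrics-server label is the usual one, not taken from this log):

    # Wait up to the same 4m0s for metrics-server to become Ready.
    kubectl --context "$PROFILE" -n kube-system wait pod \
      -l k8s-app=metrics-server --for=condition=Ready --timeout=240s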
	I0729 11:52:00.398227   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:00.412064   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:00.412139   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:00.446661   70480 cri.go:89] found id: ""
	I0729 11:52:00.446716   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.446729   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:00.446741   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:00.446793   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:00.482234   70480 cri.go:89] found id: ""
	I0729 11:52:00.482260   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.482270   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:00.482290   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:00.482357   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:00.520087   70480 cri.go:89] found id: ""
	I0729 11:52:00.520125   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.520136   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:00.520143   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:00.520203   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:00.556889   70480 cri.go:89] found id: ""
	I0729 11:52:00.556913   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.556924   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:00.556931   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:00.556996   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:00.593521   70480 cri.go:89] found id: ""
	I0729 11:52:00.593559   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.593569   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:00.593577   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:00.593644   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:00.631849   70480 cri.go:89] found id: ""
	I0729 11:52:00.631879   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.631889   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:00.631897   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:00.631956   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:00.669206   70480 cri.go:89] found id: ""
	I0729 11:52:00.669235   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.669246   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:00.669254   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:00.669314   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:00.707653   70480 cri.go:89] found id: ""
	I0729 11:52:00.707681   70480 logs.go:276] 0 containers: []
	W0729 11:52:00.707692   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:00.707702   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:00.707720   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:00.722275   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:00.722305   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:00.796212   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:00.796240   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:00.796254   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:00.882088   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:00.882135   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:00.924217   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:00.924248   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.479929   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:03.493148   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:03.493211   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:03.528114   70480 cri.go:89] found id: ""
	I0729 11:52:03.528158   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.528169   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:03.528177   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:03.528233   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:03.564509   70480 cri.go:89] found id: ""
	I0729 11:52:03.564542   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.564552   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:03.564559   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:03.564628   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:03.599850   70480 cri.go:89] found id: ""
	I0729 11:52:03.599884   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.599897   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:03.599913   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:03.599977   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:03.638988   70480 cri.go:89] found id: ""
	I0729 11:52:03.639020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.639031   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:03.639039   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:03.639112   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:03.678544   70480 cri.go:89] found id: ""
	I0729 11:52:03.678572   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.678581   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:03.678589   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:03.678651   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:03.717265   70480 cri.go:89] found id: ""
	I0729 11:52:03.717297   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.717307   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:03.717314   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:03.717379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:03.756479   70480 cri.go:89] found id: ""
	I0729 11:52:03.756504   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.756512   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:03.756519   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:03.756570   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:03.791992   70480 cri.go:89] found id: ""
	I0729 11:52:03.792020   70480 logs.go:276] 0 containers: []
	W0729 11:52:03.792031   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:03.792042   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:03.792056   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:03.881378   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:03.881417   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:03.918866   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:03.918902   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:03.972005   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:03.972041   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:03.989650   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:03.989680   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:04.069522   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:06.182277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:08.681543   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:06.570567   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:06.588524   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:06.588594   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:06.627028   70480 cri.go:89] found id: ""
	I0729 11:52:06.627071   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.627082   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:06.627089   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:06.627154   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:06.669504   70480 cri.go:89] found id: ""
	I0729 11:52:06.669536   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.669548   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:06.669558   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:06.669620   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:06.709546   70480 cri.go:89] found id: ""
	I0729 11:52:06.709573   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.709593   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:06.709603   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:06.709661   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:06.745539   70480 cri.go:89] found id: ""
	I0729 11:52:06.745568   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.745577   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:06.745585   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:06.745645   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:06.783924   70480 cri.go:89] found id: ""
	I0729 11:52:06.783960   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.783971   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:06.783978   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:06.784040   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:06.819838   70480 cri.go:89] found id: ""
	I0729 11:52:06.819869   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.819879   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:06.819886   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:06.819950   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:06.858057   70480 cri.go:89] found id: ""
	I0729 11:52:06.858085   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.858102   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:06.858110   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:06.858186   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:06.896844   70480 cri.go:89] found id: ""
	I0729 11:52:06.896875   70480 logs.go:276] 0 containers: []
	W0729 11:52:06.896885   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:06.896896   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:06.896911   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:06.953126   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:06.953166   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:06.967548   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:06.967579   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:07.046716   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:07.046740   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:07.046756   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:07.129264   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:07.129299   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:09.672314   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:09.687133   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:09.687202   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:09.728280   70480 cri.go:89] found id: ""
	I0729 11:52:09.728307   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.728316   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:09.728322   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:09.728379   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:09.765178   70480 cri.go:89] found id: ""
	I0729 11:52:09.765214   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.765225   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:09.765233   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:09.765293   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:09.801181   70480 cri.go:89] found id: ""
	I0729 11:52:09.801216   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.801225   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:09.801233   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:09.801294   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:09.845088   70480 cri.go:89] found id: ""
	I0729 11:52:09.845118   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.845129   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:09.845137   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:09.845198   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:09.883874   70480 cri.go:89] found id: ""
	I0729 11:52:09.883907   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.883918   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:09.883925   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:09.883992   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:09.918273   70480 cri.go:89] found id: ""
	I0729 11:52:09.918302   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.918312   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:09.918321   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:09.918386   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:09.978461   70480 cri.go:89] found id: ""
	I0729 11:52:09.978487   70480 logs.go:276] 0 containers: []
	W0729 11:52:09.978494   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:09.978500   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:09.978546   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:10.022219   70480 cri.go:89] found id: ""
	I0729 11:52:10.022247   70480 logs.go:276] 0 containers: []
	W0729 11:52:10.022255   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:10.022264   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:10.022274   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:10.076181   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:10.076228   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:10.090567   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:10.090600   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:10.159576   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:10.159605   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:10.159620   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:11.181276   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:13.181424   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:10.242116   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:10.242165   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:12.784286   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:12.798015   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:52:12.798111   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:52:12.834920   70480 cri.go:89] found id: ""
	I0729 11:52:12.834951   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.834962   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:52:12.834969   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:52:12.835030   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:52:12.876545   70480 cri.go:89] found id: ""
	I0729 11:52:12.876578   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.876589   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:52:12.876596   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:52:12.876665   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:52:12.912912   70480 cri.go:89] found id: ""
	I0729 11:52:12.912937   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.912944   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:52:12.912950   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:52:12.913006   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:52:12.948118   70480 cri.go:89] found id: ""
	I0729 11:52:12.948148   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.948156   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:52:12.948161   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:52:12.948207   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:52:12.984339   70480 cri.go:89] found id: ""
	I0729 11:52:12.984364   70480 logs.go:276] 0 containers: []
	W0729 11:52:12.984371   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:52:12.984377   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:52:12.984433   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:52:13.024870   70480 cri.go:89] found id: ""
	I0729 11:52:13.024906   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.024916   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:52:13.024924   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:52:13.024989   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:52:13.067951   70480 cri.go:89] found id: ""
	I0729 11:52:13.067988   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.067999   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:52:13.068007   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:52:13.068068   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:52:13.105095   70480 cri.go:89] found id: ""
	I0729 11:52:13.105126   70480 logs.go:276] 0 containers: []
	W0729 11:52:13.105136   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:52:13.105144   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:52:13.105158   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:52:13.167486   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:52:13.167537   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:52:13.183020   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:52:13.183046   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:52:13.264702   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:52:13.264734   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:52:13.264750   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:52:13.346760   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:52:13.346795   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:52:15.887849   70480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:15.901560   70480 kubeadm.go:597] duration metric: took 4m4.683363388s to restartPrimaryControlPlane
	W0729 11:52:15.901628   70480 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:15.901659   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:16.377916   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.396247   70480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.408224   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.420998   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.421024   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.421073   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.432646   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.432712   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.444442   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.455098   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.455175   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.469621   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.482797   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.482869   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.495535   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.508873   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.508967   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
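Note: the grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and since none of the files exist here the removals are no-ops before kubeadm regenerates them. A condensed sketch of the same check:

    # Keep a kubeconfig only if it already targets the expected endpoint;
    # otherwise remove it so "kubeadm init" writes a fresh one.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done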
	I0729 11:52:16.519797   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.590877   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:52:16.590933   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.768006   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.768167   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.768313   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:16.980586   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
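Note: the preflight hint above ("kubeadm config images pull") can be run ahead of init so it does not block on image downloads. A sketch using the version, binary path, and CRI socket seen elsewhere in this run:

    # Pre-pull the control-plane images for the version kubeadm is about to
    # install, through the CRI-O socket used in the reset step above.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm config images pull --kubernetes-version v1.20.0 \
      --cri-socket /var/run/crio/crio.sock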
	I0729 11:52:16.523230   69907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.013927797s)
	I0729 11:52:16.523296   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:16.541674   69907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:16.553585   69907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:16.565171   69907 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:16.565196   69907 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:16.565237   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:52:16.575919   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:16.576023   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:16.588641   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:52:16.599947   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:16.600028   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:16.612623   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.624420   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:16.624486   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:16.639271   69907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:52:16.649979   69907 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:16.650057   69907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:16.661423   69907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:16.718013   69907 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:16.718138   69907 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:16.870793   69907 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:16.870955   69907 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:16.871090   69907 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:17.100094   69907 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:17.101792   69907 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:17.101895   69907 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:17.101999   69907 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:17.102129   69907 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:17.102237   69907 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:17.102339   69907 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:17.102419   69907 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:17.102523   69907 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:17.102607   69907 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:17.102731   69907 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:17.103613   69907 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:17.103841   69907 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:17.103923   69907 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.439592   69907 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.517503   69907 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:17.731672   69907 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.877789   69907 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.930274   69907 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.930777   69907 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:17.933362   69907 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:16.982617   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:16.982732   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:16.982826   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:16.982935   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:16.983015   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:16.983079   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:16.983127   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:16.983180   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:16.983230   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:16.983291   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:16.983354   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:16.983386   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:16.983433   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:17.523710   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:17.622636   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:17.732508   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:17.869921   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:17.892581   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.893213   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.893300   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.049043   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:17.935629   69907 out.go:204]   - Booting up control plane ...
	I0729 11:52:17.935753   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:17.935870   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:17.935955   69907 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:17.961756   69907 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:17.962814   69907 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:17.962879   69907 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:18.102662   69907 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:18.102806   69907 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:15.181970   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:17.682108   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:18.051012   70480 out.go:204]   - Booting up control plane ...
	I0729 11:52:18.051192   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:18.058194   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:18.062340   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:18.063509   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:18.066121   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:52:19.116356   69907 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010567801s
	I0729 11:52:19.116461   69907 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:24.118059   69907 kubeadm.go:310] [api-check] The API server is healthy after 5.002510977s
	I0729 11:52:24.132586   69907 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:24.148251   69907 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:24.188769   69907 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:24.188956   69907 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-731235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:24.205790   69907 kubeadm.go:310] [bootstrap-token] Using token: pvm7ux.41geojc66jibd993
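Note: the api-check phase above reports the API server healthy after about 5s. The equivalent manual probe asks the freshly started apiserver for its readiness report (a sketch, using the kubectl binary and the admin kubeconfig this run writes):

    # Ask the new apiserver for a verbose readiness report.
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf get --raw='/readyz?verbose'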
	I0729 11:52:20.181703   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:22.181889   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.182317   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:24.207334   69907 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:24.207519   69907 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:24.213637   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:24.226771   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:24.231379   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:24.239349   69907 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:24.248803   69907 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:24.524966   69907 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:24.961557   69907 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:25.522876   69907 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:25.523985   69907 kubeadm.go:310] 
	I0729 11:52:25.524083   69907 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:25.524093   69907 kubeadm.go:310] 
	I0729 11:52:25.524203   69907 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:25.524234   69907 kubeadm.go:310] 
	I0729 11:52:25.524273   69907 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:25.524353   69907 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:25.524441   69907 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:25.524460   69907 kubeadm.go:310] 
	I0729 11:52:25.524520   69907 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:25.524527   69907 kubeadm.go:310] 
	I0729 11:52:25.524578   69907 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:25.524584   69907 kubeadm.go:310] 
	I0729 11:52:25.524625   69907 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:25.524728   69907 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:25.524834   69907 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:25.524843   69907 kubeadm.go:310] 
	I0729 11:52:25.524957   69907 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:25.525047   69907 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:25.525054   69907 kubeadm.go:310] 
	I0729 11:52:25.525175   69907 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525314   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:25.525343   69907 kubeadm.go:310] 	--control-plane 
	I0729 11:52:25.525351   69907 kubeadm.go:310] 
	I0729 11:52:25.525449   69907 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:25.525463   69907 kubeadm.go:310] 
	I0729 11:52:25.525569   69907 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pvm7ux.41geojc66jibd993 \
	I0729 11:52:25.525709   69907 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:25.526283   69907 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
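Note: the join commands printed above embed a bootstrap token and the CA cert hash; both can be regenerated later if the token has expired. A sketch (assumes openssl is on the node; the CA path comes from the certificateDir shown earlier in this run):

    # Print a fresh join command (creates a new bootstrap token).
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm token create --print-join-command
    # Recompute the discovery-token-ca-cert-hash by hand.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'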
	I0729 11:52:25.526361   69907 cni.go:84] Creating CNI manager for ""
	I0729 11:52:25.526378   69907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:25.528362   69907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:25.529726   69907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:25.546760   69907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
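Note: the 496-byte file copied above is minikube's bridge CNI config. Below is a representative conflist of that shape, written as a heredoc; this is an illustrative sketch only, the exact contents minikube copies may differ, and the subnet is an assumption:

    # Write a minimal bridge CNI config of the kind placed in /etc/cni/net.d.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF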
	I0729 11:52:25.571336   69907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:25.571457   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-731235 minikube.k8s.io/updated_at=2024_07_29T11_52_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=embed-certs-731235 minikube.k8s.io/primary=true
	I0729 11:52:25.571460   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:25.600643   69907 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:25.771231   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.271938   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.771337   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.271880   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:27.772276   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.271327   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:28.771854   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:26.680959   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.180277   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:29.271904   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:29.771958   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.271342   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:30.771316   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.271539   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.771490   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.271537   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:32.771969   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.271498   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:33.771963   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:31.681002   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.180450   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:34.271709   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:34.771968   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.271985   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:35.771798   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.271877   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:36.771950   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.271225   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:37.771622   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.271354   69907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:38.369678   69907 kubeadm.go:1113] duration metric: took 12.798280829s to wait for elevateKubeSystemPrivileges
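Note: the long run of "get sa default" calls above is a poll: minikube repeats the same kubectl query until the "default" ServiceAccount exists, which signals that the control plane's service-account machinery has caught up. A condensed sketch of that poll, mirroring the logged command:

    # Poll until the "default" ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done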
	I0729 11:52:38.369716   69907 kubeadm.go:394] duration metric: took 5m10.090728575s to StartCluster
	I0729 11:52:38.369737   69907 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.369812   69907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:38.371527   69907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:38.371774   69907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:38.371829   69907 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:38.371904   69907 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-731235"
	I0729 11:52:38.371925   69907 addons.go:69] Setting default-storageclass=true in profile "embed-certs-731235"
	I0729 11:52:38.371956   69907 addons.go:69] Setting metrics-server=true in profile "embed-certs-731235"
	I0729 11:52:38.371977   69907 config.go:182] Loaded profile config "embed-certs-731235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:38.371991   69907 addons.go:234] Setting addon metrics-server=true in "embed-certs-731235"
	W0729 11:52:38.371999   69907 addons.go:243] addon metrics-server should already be in state true
	I0729 11:52:38.372041   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.371966   69907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-731235"
	I0729 11:52:38.371936   69907 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-731235"
	W0729 11:52:38.372204   69907 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:38.372240   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.372365   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372402   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372460   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.372615   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.372662   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.373455   69907 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:38.374757   69907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:38.388333   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0729 11:52:38.388901   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.389443   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.389467   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.389661   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0729 11:52:38.389858   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.390469   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.390499   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.390717   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.391258   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.391278   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.391622   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0729 11:52:38.391655   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.391937   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.391966   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.392511   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.392538   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.392904   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.393459   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.393491   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.395933   69907 addons.go:234] Setting addon default-storageclass=true in "embed-certs-731235"
	W0729 11:52:38.395953   69907 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:38.395980   69907 host.go:66] Checking if "embed-certs-731235" exists ...
	I0729 11:52:38.396342   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.396371   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.411784   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 11:52:38.412254   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.412549   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0729 11:52:38.412811   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.412831   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.412911   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.413173   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413340   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.413470   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.413488   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.413830   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.413997   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.414897   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0729 11:52:38.415312   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.415395   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.415753   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.415772   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.415918   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.416126   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.416663   69907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:38.416690   69907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:38.418043   69907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:38.418047   69907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:38.419620   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:38.419640   69907 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:38.419661   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.419693   69907 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:38.419702   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:38.419714   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.423646   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424115   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424184   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424208   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424370   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.424573   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.424631   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.424647   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.424722   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.424821   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.425101   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.425266   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.425394   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.425528   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.432777   69907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0729 11:52:38.433219   69907 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:38.433735   69907 main.go:141] libmachine: Using API Version  1
	I0729 11:52:38.433759   69907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:38.434121   69907 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:38.434299   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetState
	I0729 11:52:38.435957   69907 main.go:141] libmachine: (embed-certs-731235) Calling .DriverName
	I0729 11:52:38.436176   69907 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.436195   69907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:38.436216   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHHostname
	I0729 11:52:38.438989   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439431   69907 main.go:141] libmachine: (embed-certs-731235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:bd:81", ip: ""} in network mk-embed-certs-731235: {Iface:virbr2 ExpiryTime:2024-07-29 12:47:13 +0000 UTC Type:0 Mac:52:54:00:8a:bd:81 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-731235 Clientid:01:52:54:00:8a:bd:81}
	I0729 11:52:38.439508   69907 main.go:141] libmachine: (embed-certs-731235) DBG | domain embed-certs-731235 has defined IP address 192.168.61.202 and MAC address 52:54:00:8a:bd:81 in network mk-embed-certs-731235
	I0729 11:52:38.439627   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHPort
	I0729 11:52:38.439783   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHKeyPath
	I0729 11:52:38.439929   69907 main.go:141] libmachine: (embed-certs-731235) Calling .GetSSHUsername
	I0729 11:52:38.440077   69907 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/embed-certs-731235/id_rsa Username:docker}
	I0729 11:52:38.598513   69907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:38.627199   69907 node_ready.go:35] waiting up to 6m0s for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639168   69907 node_ready.go:49] node "embed-certs-731235" has status "Ready":"True"
	I0729 11:52:38.639199   69907 node_ready.go:38] duration metric: took 11.953793ms for node "embed-certs-731235" to be "Ready" ...
	I0729 11:52:38.639208   69907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:38.644562   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:38.678019   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:38.678042   69907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:38.706214   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:52:38.706247   69907 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:52:38.745796   69907 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.745824   69907 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:38.767879   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:52:38.778016   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:38.790742   69907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:36.181329   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:38.183254   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:39.974095   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196041477s)
	I0729 11:52:39.974096   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206172307s)
	I0729 11:52:39.974194   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974247   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974203   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974345   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974811   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974831   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974840   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974847   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.974857   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.974925   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.974938   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.974946   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.974955   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:39.975075   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.975165   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.975244   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976561   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:39.976579   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:39.976577   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:39.976589   69907 addons.go:475] Verifying addon metrics-server=true in "embed-certs-731235"
	I0729 11:52:39.999773   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:39.999799   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.000097   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.000118   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.026995   69907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236214166s)
	I0729 11:52:40.027052   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027063   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027383   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.027402   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.027412   69907 main.go:141] libmachine: Making call to close driver server
	I0729 11:52:40.027422   69907 main.go:141] libmachine: (embed-certs-731235) Calling .Close
	I0729 11:52:40.027387   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029105   69907 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:52:40.029109   69907 main.go:141] libmachine: (embed-certs-731235) DBG | Closing plugin on server side
	I0729 11:52:40.029124   69907 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:52:40.031066   69907 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner
	I0729 11:52:36.127977   70231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.15430735s)
	I0729 11:52:36.128057   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:36.147540   70231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:52:36.159519   70231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:52:36.171332   70231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:52:36.171353   70231 kubeadm.go:157] found existing configuration files:
	
	I0729 11:52:36.171406   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 11:52:36.182915   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:52:36.183084   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:52:36.193912   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 11:52:36.203972   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:52:36.204036   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:52:36.213886   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.223205   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:52:36.223260   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:52:36.235379   70231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 11:52:36.245392   70231 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:52:36.245461   70231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:52:36.255495   70231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:52:36.468759   70231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:52:40.032797   69907 addons.go:510] duration metric: took 1.660964221s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner]
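Once the enable-addons step above completes, the recorded addon state for this profile can be cross-checked from the host with the addons subcommand, using the same -p profile flag the test harness uses elsewhere in this report (shown only as a spot-check, not part of the test flow):

	minikube -p embed-certs-731235 addons list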
	I0729 11:52:40.654126   69907 pod_ready.go:102] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:41.173676   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.173708   69907 pod_ready.go:81] duration metric: took 2.529122203s for pod "coredns-7db6d8ff4d-6md2j" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.173721   69907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183179   69907 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.183207   69907 pod_ready.go:81] duration metric: took 9.478224ms for pod "coredns-7db6d8ff4d-rlhzt" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.183220   69907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192149   69907 pod_ready.go:92] pod "etcd-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.192177   69907 pod_ready.go:81] duration metric: took 8.949045ms for pod "etcd-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.192189   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199322   69907 pod_ready.go:92] pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.199347   69907 pod_ready.go:81] duration metric: took 7.150124ms for pod "kube-apiserver-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.199360   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210464   69907 pod_ready.go:92] pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.210491   69907 pod_ready.go:81] duration metric: took 11.123649ms for pod "kube-controller-manager-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.210504   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549786   69907 pod_ready.go:92] pod "kube-proxy-ch48n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.549814   69907 pod_ready.go:81] duration metric: took 339.30332ms for pod "kube-proxy-ch48n" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.549828   69907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949607   69907 pod_ready.go:92] pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace has status "Ready":"True"
	I0729 11:52:41.949629   69907 pod_ready.go:81] duration metric: took 399.794484ms for pod "kube-scheduler-embed-certs-731235" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:41.949637   69907 pod_ready.go:38] duration metric: took 3.310420523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:41.949650   69907 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:52:41.949732   69907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:52:41.967899   69907 api_server.go:72] duration metric: took 3.596093405s to wait for apiserver process to appear ...
	I0729 11:52:41.967933   69907 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:52:41.967957   69907 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0729 11:52:41.973064   69907 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0729 11:52:41.974128   69907 api_server.go:141] control plane version: v1.30.3
	I0729 11:52:41.974151   69907 api_server.go:131] duration metric: took 6.211514ms to wait for apiserver health ...
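The healthz wait logged above is just an HTTPS GET against the apiserver. An equivalent manual probe of the same endpoint, assuming the IP and port shown in the log and that /healthz is reachable without credentials (the default system:public-info-viewer binding usually permits this on kubeadm clusters; -k only skips certificate verification):

	curl -k https://192.168.61.202:8443/healthz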
	I0729 11:52:41.974158   69907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:52:42.152607   69907 system_pods.go:59] 9 kube-system pods found
	I0729 11:52:42.152648   69907 system_pods.go:61] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.152656   69907 system_pods.go:61] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.152663   69907 system_pods.go:61] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.152670   69907 system_pods.go:61] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.152674   69907 system_pods.go:61] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.152680   69907 system_pods.go:61] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.152685   69907 system_pods.go:61] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.152694   69907 system_pods.go:61] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.152702   69907 system_pods.go:61] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.152714   69907 system_pods.go:74] duration metric: took 178.548453ms to wait for pod list to return data ...
	I0729 11:52:42.152728   69907 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:52:42.349148   69907 default_sa.go:45] found service account: "default"
	I0729 11:52:42.349182   69907 default_sa.go:55] duration metric: took 196.446704ms for default service account to be created ...
	I0729 11:52:42.349192   69907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:52:42.552384   69907 system_pods.go:86] 9 kube-system pods found
	I0729 11:52:42.552416   69907 system_pods.go:89] "coredns-7db6d8ff4d-6md2j" [37472eb3-a941-4ff9-a0af-0ce42d604318] Running
	I0729 11:52:42.552425   69907 system_pods.go:89] "coredns-7db6d8ff4d-rlhzt" [298c2d3b-8a1e-4146-987a-f9c1eff6f92c] Running
	I0729 11:52:42.552431   69907 system_pods.go:89] "etcd-embed-certs-731235" [e31cd23f-d730-410d-a748-f571086a6836] Running
	I0729 11:52:42.552437   69907 system_pods.go:89] "kube-apiserver-embed-certs-731235" [c5da2010-af3d-40c2-a018-7df36c6574ba] Running
	I0729 11:52:42.552442   69907 system_pods.go:89] "kube-controller-manager-embed-certs-731235" [ac906e27-abf6-4577-af26-6f86ae6796ec] Running
	I0729 11:52:42.552448   69907 system_pods.go:89] "kube-proxy-ch48n" [68896b36-6aa0-4dcc-ad3a-74573aa1c3ec] Running
	I0729 11:52:42.552453   69907 system_pods.go:89] "kube-scheduler-embed-certs-731235" [497ae4f9-97c2-4029-8225-2f4f186c958c] Running
	I0729 11:52:42.552462   69907 system_pods.go:89] "metrics-server-569cc877fc-gxczz" [096f1de4-e064-42bc-8a16-aa08320addb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:52:42.552472   69907 system_pods.go:89] "storage-provisioner" [2fea7bc2-554e-4fe9-b2af-c4e340e85c18] Running
	I0729 11:52:42.552483   69907 system_pods.go:126] duration metric: took 203.284903ms to wait for k8s-apps to be running ...
	I0729 11:52:42.552492   69907 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:52:42.552546   69907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:52:42.569158   69907 system_svc.go:56] duration metric: took 16.657226ms WaitForService to wait for kubelet
	I0729 11:52:42.569186   69907 kubeadm.go:582] duration metric: took 4.19738713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:52:42.569205   69907 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:52:42.749356   69907 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:52:42.749385   69907 node_conditions.go:123] node cpu capacity is 2
	I0729 11:52:42.749399   69907 node_conditions.go:105] duration metric: took 180.189313ms to run NodePressure ...
	I0729 11:52:42.749411   69907 start.go:241] waiting for startup goroutines ...
	I0729 11:52:42.749417   69907 start.go:246] waiting for cluster config update ...
	I0729 11:52:42.749427   69907 start.go:255] writing updated cluster config ...
	I0729 11:52:42.749672   69907 ssh_runner.go:195] Run: rm -f paused
	I0729 11:52:42.807579   69907 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:52:42.809609   69907 out.go:177] * Done! kubectl is now configured to use "embed-certs-731235" cluster and "default" namespace by default
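The embed-certs-731235 profile is reported as started, but the pod listings above still show metrics-server-569cc877fc-gxczz Pending. A minimal spot-check from the host, assuming the context name written to the kubeconfig above and the k8s-app=metrics-server label the minikube metrics-server addon normally applies:

	kubectl --context embed-certs-731235 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-731235 -n kube-system describe pods -l k8s-app=metrics-server

The Events section of the describe output would show why the container stays unready; in this run the addon is pointed at fake.domain/registry.k8s.io/echoserver:1.4, which is presumably not pullable.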
	I0729 11:52:40.681693   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:42.685146   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.646240   70231 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:52:46.646305   70231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:52:46.646407   70231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:52:46.646537   70231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:52:46.646653   70231 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:52:46.646749   70231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:52:46.648483   70231 out.go:204]   - Generating certificates and keys ...
	I0729 11:52:46.648572   70231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:52:46.648626   70231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:52:46.648719   70231 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:52:46.648820   70231 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:52:46.648941   70231 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:52:46.649013   70231 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:52:46.649068   70231 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:52:46.649121   70231 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:52:46.649182   70231 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:52:46.649248   70231 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:52:46.649294   70231 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:52:46.649378   70231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:52:46.649455   70231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:52:46.649529   70231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:52:46.649609   70231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:52:46.649693   70231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:52:46.649778   70231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:52:46.649912   70231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:52:46.650023   70231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:52:46.651575   70231 out.go:204]   - Booting up control plane ...
	I0729 11:52:46.651657   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:52:46.651723   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:52:46.651793   70231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:52:46.651893   70231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:52:46.651963   70231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:52:46.651996   70231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:52:46.652155   70231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:52:46.652258   70231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:52:46.652315   70231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00230111s
	I0729 11:52:46.652381   70231 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:52:46.652444   70231 kubeadm.go:310] [api-check] The API server is healthy after 5.502783682s
	I0729 11:52:46.652588   70231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:52:46.652734   70231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:52:46.652802   70231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:52:46.652991   70231 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-754486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:52:46.653041   70231 kubeadm.go:310] [bootstrap-token] Using token: 341fdm.tm8thttie16wi2qy
	I0729 11:52:46.654343   70231 out.go:204]   - Configuring RBAC rules ...
	I0729 11:52:46.654458   70231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:52:46.654555   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:52:46.654745   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:52:46.654914   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:52:46.655023   70231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:52:46.655094   70231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:52:46.655202   70231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:52:46.655242   70231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:52:46.655285   70231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:52:46.655293   70231 kubeadm.go:310] 
	I0729 11:52:46.655349   70231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:52:46.655355   70231 kubeadm.go:310] 
	I0729 11:52:46.655427   70231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:52:46.655433   70231 kubeadm.go:310] 
	I0729 11:52:46.655453   70231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:52:46.655509   70231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:52:46.655576   70231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:52:46.655586   70231 kubeadm.go:310] 
	I0729 11:52:46.655653   70231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:52:46.655660   70231 kubeadm.go:310] 
	I0729 11:52:46.655702   70231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:52:46.655708   70231 kubeadm.go:310] 
	I0729 11:52:46.655772   70231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:52:46.655861   70231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:52:46.655975   70231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:52:46.656000   70231 kubeadm.go:310] 
	I0729 11:52:46.656118   70231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:52:46.656223   70231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:52:46.656233   70231 kubeadm.go:310] 
	I0729 11:52:46.656344   70231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656477   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:52:46.656502   70231 kubeadm.go:310] 	--control-plane 
	I0729 11:52:46.656508   70231 kubeadm.go:310] 
	I0729 11:52:46.656580   70231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:52:46.656586   70231 kubeadm.go:310] 
	I0729 11:52:46.656669   70231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 341fdm.tm8thttie16wi2qy \
	I0729 11:52:46.656831   70231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:52:46.656851   70231 cni.go:84] Creating CNI manager for ""
	I0729 11:52:46.656862   70231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:52:46.659007   70231 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:52:45.180215   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:47.181213   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:46.660238   70231 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:52:46.671866   70231 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
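The 1-k8s.conflist written here is the bridge CNI configuration minikube generates for the kvm2 + crio combination noted above. If it needs to be inspected after the fact, it can be read back from the node, assuming this minikube build accepts a command argument to ssh as its documentation describes (profile name taken from the log):

	minikube -p default-k8s-diff-port-754486 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"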
	I0729 11:52:46.692991   70231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:46.693063   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-754486 minikube.k8s.io/updated_at=2024_07_29T11_52_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=default-k8s-diff-port-754486 minikube.k8s.io/primary=true
	I0729 11:52:46.897228   70231 ops.go:34] apiserver oom_adj: -16
	I0729 11:52:46.897373   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.398474   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:47.898225   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.397547   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:48.897716   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.398393   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.898110   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:49.680176   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:51.680900   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:53.681105   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:50.397646   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:50.897618   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.398130   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:51.897444   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.398334   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:52.898233   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.397587   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:53.898255   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.397634   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:54.898138   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.182828   69419 pod_ready.go:102] pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace has status "Ready":"False"
	I0729 11:52:56.674072   69419 pod_ready.go:81] duration metric: took 4m0.000131876s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" ...
	E0729 11:52:56.674094   69419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-x4t76" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 11:52:56.674113   69419 pod_ready.go:38] duration metric: took 4m9.054741116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:56.674144   69419 kubeadm.go:597] duration metric: took 4m16.587842765s to restartPrimaryControlPlane
	W0729 11:52:56.674197   69419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 11:52:56.674234   69419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:52:55.398096   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:55.897565   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.397785   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:56.897860   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.397925   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:57.897989   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.397500   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:58.897468   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.398228   70231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:52:59.483894   70231 kubeadm.go:1113] duration metric: took 12.790894124s to wait for elevateKubeSystemPrivileges
	I0729 11:52:59.483924   70231 kubeadm.go:394] duration metric: took 5m10.397319925s to StartCluster
	I0729 11:52:59.483941   70231 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.484019   70231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:52:59.485737   70231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:52:59.486008   70231 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.111 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:52:59.486074   70231 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:52:59.486163   70231 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486195   70231 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754486"
	I0729 11:52:59.486196   70231 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486210   70231 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754486"
	I0729 11:52:59.486238   70231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754486"
	I0729 11:52:59.486251   70231 config.go:182] Loaded profile config "default-k8s-diff-port-754486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:52:59.486256   70231 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.486266   70231 addons.go:243] addon metrics-server should already be in state true
	W0729 11:52:59.486205   70231 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:52:59.486295   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486307   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.486550   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486555   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486572   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486573   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.486617   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.486644   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.487888   70231 out.go:177] * Verifying Kubernetes components...
	I0729 11:52:59.489501   70231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:52:59.502095   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0729 11:52:59.502614   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.502832   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0729 11:52:59.503207   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503229   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.503252   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.503805   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.503829   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.504128   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504216   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.504317   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.504801   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.504847   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.505348   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0729 11:52:59.505701   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.506318   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.506342   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.506738   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.507261   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.507290   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.508065   70231 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754486"
	W0729 11:52:59.508084   70231 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:52:59.508111   70231 host.go:66] Checking if "default-k8s-diff-port-754486" exists ...
	I0729 11:52:59.508423   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.508462   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.526240   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 11:52:59.526269   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0729 11:52:59.526313   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0729 11:52:59.526654   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526763   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.526826   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.527214   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527230   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527351   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527388   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527405   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.527429   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.527668   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527715   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.527901   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.527931   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.528030   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.528913   70231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:52:59.528940   70231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:52:59.529836   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.530004   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.532077   70231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:52:59.532101   70231 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:52:59.533597   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:52:59.533619   70231 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:52:59.533641   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.533645   70231 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:52:59.533659   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:52:59.533681   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.538047   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538082   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538654   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538669   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.538679   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538686   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538693   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.538864   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.538889   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539050   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539065   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.539239   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.539237   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.539374   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.546505   70231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0729 11:52:59.546918   70231 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:52:59.547428   70231 main.go:141] libmachine: Using API Version  1
	I0729 11:52:59.547455   70231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:52:59.547790   70231 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:52:59.548011   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetState
	I0729 11:52:59.549607   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .DriverName
	I0729 11:52:59.549899   70231 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.549915   70231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:52:59.549934   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHHostname
	I0729 11:52:59.553591   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555220   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:06:44", ip: ""} in network mk-default-k8s-diff-port-754486: {Iface:virbr3 ExpiryTime:2024-07-29 12:40:07 +0000 UTC Type:0 Mac:52:54:00:c1:06:44 Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:default-k8s-diff-port-754486 Clientid:01:52:54:00:c1:06:44}
	I0729 11:52:59.555251   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | domain default-k8s-diff-port-754486 has defined IP address 192.168.50.111 and MAC address 52:54:00:c1:06:44 in network mk-default-k8s-diff-port-754486
	I0729 11:52:59.555457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHPort
	I0729 11:52:59.555814   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHKeyPath
	I0729 11:52:59.556005   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .GetSSHUsername
	I0729 11:52:59.556154   70231 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/default-k8s-diff-port-754486/id_rsa Username:docker}
	I0729 11:52:59.758973   70231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:52:59.809677   70231 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818208   70231 node_ready.go:49] node "default-k8s-diff-port-754486" has status "Ready":"True"
	I0729 11:52:59.818252   70231 node_ready.go:38] duration metric: took 8.523612ms for node "default-k8s-diff-port-754486" to be "Ready" ...
	I0729 11:52:59.818264   70231 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:52:59.825340   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:52:59.935053   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:52:59.954324   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:52:59.954346   70231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:52:59.962991   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:00.052728   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:00.052754   70231 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:00.168588   70231 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.168620   70231 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:52:58.067350   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:52:58.067472   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:52:58.067690   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:00.230134   70231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:00.485028   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485062   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485424   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485447   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.485461   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.485457   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485470   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.485708   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:00.485716   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.485731   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:00.502040   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:00.502061   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:00.502386   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:00.502410   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.400774   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437744399s)
	I0729 11:53:01.400842   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.400856   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401229   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401248   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.401284   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.401378   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.401387   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.401637   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.401648   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408496   70231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.178316081s)
	I0729 11:53:01.408558   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408577   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.408859   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.408879   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.408859   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.408904   70231 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:01.408917   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) Calling .Close
	I0729 11:53:01.409181   70231 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:01.409218   70231 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:01.409232   70231 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754486"
	I0729 11:53:01.409254   70231 main.go:141] libmachine: (default-k8s-diff-port-754486) DBG | Closing plugin on server side
	I0729 11:53:01.411682   70231 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 11:53:01.413048   70231 addons.go:510] duration metric: took 1.926975712s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 11:53:01.831515   70231 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:02.331492   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.331518   70231 pod_ready.go:81] duration metric: took 2.506145957s for pod "coredns-7db6d8ff4d-4zl6p" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.331530   70231 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341152   70231 pod_ready.go:92] pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.341175   70231 pod_ready.go:81] duration metric: took 9.638268ms for pod "coredns-7db6d8ff4d-fbcqh" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.341183   70231 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346946   70231 pod_ready.go:92] pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.346971   70231 pod_ready.go:81] duration metric: took 5.77844ms for pod "etcd-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.346981   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351401   70231 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.351423   70231 pod_ready.go:81] duration metric: took 4.432109ms for pod "kube-apiserver-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.351435   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355410   70231 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.355428   70231 pod_ready.go:81] duration metric: took 3.986166ms for pod "kube-controller-manager-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.355439   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729604   70231 pod_ready.go:92] pod "kube-proxy-7gkd8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:02.729634   70231 pod_ready.go:81] duration metric: took 374.188296ms for pod "kube-proxy-7gkd8" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:02.729653   70231 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130027   70231 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:03.130052   70231 pod_ready.go:81] duration metric: took 400.392433ms for pod "kube-scheduler-default-k8s-diff-port-754486" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:03.130061   70231 pod_ready.go:38] duration metric: took 3.311785643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:03.130077   70231 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:03.130134   70231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:03.152134   70231 api_server.go:72] duration metric: took 3.666086394s to wait for apiserver process to appear ...
	I0729 11:53:03.152164   70231 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:03.152188   70231 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8444/healthz ...
	I0729 11:53:03.157357   70231 api_server.go:279] https://192.168.50.111:8444/healthz returned 200:
	ok
	I0729 11:53:03.158235   70231 api_server.go:141] control plane version: v1.30.3
	I0729 11:53:03.158254   70231 api_server.go:131] duration metric: took 6.083486ms to wait for apiserver health ...
	I0729 11:53:03.158261   70231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:03.333517   70231 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:03.333547   70231 system_pods.go:61] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.333552   70231 system_pods.go:61] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.333556   70231 system_pods.go:61] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.333559   70231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.333563   70231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.333566   70231 system_pods.go:61] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.333568   70231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.333574   70231 system_pods.go:61] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.333577   70231 system_pods.go:61] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.333586   70231 system_pods.go:74] duration metric: took 175.319992ms to wait for pod list to return data ...
	I0729 11:53:03.333595   70231 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:03.529964   70231 default_sa.go:45] found service account: "default"
	I0729 11:53:03.529989   70231 default_sa.go:55] duration metric: took 196.388041ms for default service account to be created ...
	I0729 11:53:03.529998   70231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:03.733015   70231 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:03.733051   70231 system_pods.go:89] "coredns-7db6d8ff4d-4zl6p" [1182fef3-3604-44e8-b428-97677c9b1e72] Running
	I0729 11:53:03.733058   70231 system_pods.go:89] "coredns-7db6d8ff4d-fbcqh" [0e2834a2-a70d-4770-9f11-679f711a0207] Running
	I0729 11:53:03.733062   70231 system_pods.go:89] "etcd-default-k8s-diff-port-754486" [25e86e0e-ca7b-4033-8c3c-c34254c1bdbb] Running
	I0729 11:53:03.733066   70231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754486" [5f6f4190-cf37-4eac-adce-9d128d7e3d24] Running
	I0729 11:53:03.733070   70231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754486" [29d73e09-493a-42b6-a30d-2ae5ddc84b7e] Running
	I0729 11:53:03.733075   70231 system_pods.go:89] "kube-proxy-7gkd8" [6699fd97-db3a-4ad9-911e-637b6401ba46] Running
	I0729 11:53:03.733081   70231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754486" [a98b8c70-a7e4-4ca2-ac9b-0055199bad61] Running
	I0729 11:53:03.733090   70231 system_pods.go:89] "metrics-server-569cc877fc-rgzfc" [cc8f9151-b09f-4a1d-95bc-2e271bbf24e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:03.733097   70231 system_pods.go:89] "storage-provisioner" [7d5b7866-e0f0-4f25-a9d9-0eba38db9e76] Running
	I0729 11:53:03.733108   70231 system_pods.go:126] duration metric: took 203.104097ms to wait for k8s-apps to be running ...
	I0729 11:53:03.733121   70231 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:03.733165   70231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:03.749014   70231 system_svc.go:56] duration metric: took 15.886799ms WaitForService to wait for kubelet
	I0729 11:53:03.749045   70231 kubeadm.go:582] duration metric: took 4.263001752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:03.749070   70231 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:03.930356   70231 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:03.930380   70231 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:03.930390   70231 node_conditions.go:105] duration metric: took 181.31486ms to run NodePressure ...
	I0729 11:53:03.930399   70231 start.go:241] waiting for startup goroutines ...
	I0729 11:53:03.930406   70231 start.go:246] waiting for cluster config update ...
	I0729 11:53:03.930417   70231 start.go:255] writing updated cluster config ...
	I0729 11:53:03.930690   70231 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:03.984862   70231 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:53:03.986829   70231 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754486" cluster and "default" namespace by default
	I0729 11:53:03.068218   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:03.068464   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:13.068777   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:13.069011   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:23.088658   69419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.414400207s)
	I0729 11:53:23.088743   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:23.104735   69419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:53:23.115145   69419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:53:23.125890   69419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:53:23.125913   69419 kubeadm.go:157] found existing configuration files:
	
	I0729 11:53:23.125969   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:53:23.136854   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:53:23.136914   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:53:23.148400   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:53:23.157595   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:53:23.157670   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:53:23.167281   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.177119   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:53:23.177176   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:53:23.187359   69419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:53:23.197033   69419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:53:23.197110   69419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:53:23.207490   69419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:53:23.254112   69419 kubeadm.go:310] W0729 11:53:23.231768    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.254983   69419 kubeadm.go:310] W0729 11:53:23.232599    2924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 11:53:23.383993   69419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:53:32.410305   69419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 11:53:32.410378   69419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:53:32.410483   69419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:53:32.410611   69419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:53:32.410758   69419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 11:53:32.410840   69419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:53:32.412547   69419 out.go:204]   - Generating certificates and keys ...
	I0729 11:53:32.412651   69419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:53:32.412761   69419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:53:32.412879   69419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:53:32.412973   69419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:53:32.413101   69419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:53:32.413176   69419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:53:32.413228   69419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:53:32.413279   69419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:53:32.413346   69419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:53:32.413427   69419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:53:32.413482   69419 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:53:32.413577   69419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:53:32.413644   69419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:53:32.413717   69419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:53:32.413795   69419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:53:32.413880   69419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:53:32.413970   69419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:53:32.414075   69419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:53:32.414167   69419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:53:32.415701   69419 out.go:204]   - Booting up control plane ...
	I0729 11:53:32.415817   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:53:32.415927   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:53:32.416034   69419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:53:32.416205   69419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:53:32.416312   69419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:53:32.416350   69419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:53:32.416466   69419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:53:32.416564   69419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:53:32.416658   69419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.786281ms
	I0729 11:53:32.416730   69419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:53:32.416803   69419 kubeadm.go:310] [api-check] The API server is healthy after 5.501546935s
	I0729 11:53:32.416941   69419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:53:32.417099   69419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:53:32.417184   69419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:53:32.417349   69419 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-297799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:53:32.417434   69419 kubeadm.go:310] [bootstrap-token] Using token: 9fg92x.rq4eihzyqcflv0gj
	I0729 11:53:32.418783   69419 out.go:204]   - Configuring RBAC rules ...
	I0729 11:53:32.418899   69419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:53:32.418969   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:53:32.419100   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:53:32.419239   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:53:32.419337   69419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:53:32.419423   69419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:53:32.419544   69419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:53:32.419594   69419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:53:32.419633   69419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:53:32.419639   69419 kubeadm.go:310] 
	I0729 11:53:32.419686   69419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:53:32.419695   69419 kubeadm.go:310] 
	I0729 11:53:32.419756   69419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:53:32.419762   69419 kubeadm.go:310] 
	I0729 11:53:32.419802   69419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:53:32.419858   69419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:53:32.419901   69419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:53:32.419911   69419 kubeadm.go:310] 
	I0729 11:53:32.419965   69419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:53:32.419971   69419 kubeadm.go:310] 
	I0729 11:53:32.420017   69419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:53:32.420025   69419 kubeadm.go:310] 
	I0729 11:53:32.420072   69419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:53:32.420137   69419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:53:32.420200   69419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:53:32.420205   69419 kubeadm.go:310] 
	I0729 11:53:32.420277   69419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:53:32.420340   69419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:53:32.420345   69419 kubeadm.go:310] 
	I0729 11:53:32.420416   69419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420506   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f \
	I0729 11:53:32.420531   69419 kubeadm.go:310] 	--control-plane 
	I0729 11:53:32.420544   69419 kubeadm.go:310] 
	I0729 11:53:32.420645   69419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:53:32.420654   69419 kubeadm.go:310] 
	I0729 11:53:32.420765   69419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fg92x.rq4eihzyqcflv0gj \
	I0729 11:53:32.420895   69419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:847bb472089d412d6d48ccaa16f49dea2a379fbd4305758c487aa88508cb582f 
	I0729 11:53:32.420911   69419 cni.go:84] Creating CNI manager for ""
	I0729 11:53:32.420920   69419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 11:53:32.422438   69419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 11:53:32.423731   69419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 11:53:32.435581   69419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 11:53:32.457560   69419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:53:32.457665   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:32.457719   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-297799 minikube.k8s.io/updated_at=2024_07_29T11_53_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=no-preload-297799 minikube.k8s.io/primary=true
	I0729 11:53:32.486072   69419 ops.go:34] apiserver oom_adj: -16
	I0729 11:53:32.674003   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.174011   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.674077   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:34.174383   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:33.069886   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:53:33.070112   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:53:34.674510   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.174124   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:35.674135   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.174420   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.674370   69419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:53:36.787916   69419 kubeadm.go:1113] duration metric: took 4.330303492s to wait for elevateKubeSystemPrivileges
	I0729 11:53:36.787961   69419 kubeadm.go:394] duration metric: took 4m56.766239734s to StartCluster
	I0729 11:53:36.787983   69419 settings.go:142] acquiring lock: {Name:mk4073409bb15821bbdc83fc5608de3180c48daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.788071   69419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:53:36.790440   69419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/kubeconfig: {Name:mk94a6c1852bfa568269a1f84be0375e8d322f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:53:36.790747   69419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:53:36.790823   69419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:53:36.790914   69419 addons.go:69] Setting storage-provisioner=true in profile "no-preload-297799"
	I0729 11:53:36.790929   69419 addons.go:69] Setting default-storageclass=true in profile "no-preload-297799"
	I0729 11:53:36.790946   69419 addons.go:234] Setting addon storage-provisioner=true in "no-preload-297799"
	W0729 11:53:36.790956   69419 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:53:36.790970   69419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-297799"
	I0729 11:53:36.790963   69419 addons.go:69] Setting metrics-server=true in profile "no-preload-297799"
	I0729 11:53:36.790990   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791009   69419 addons.go:234] Setting addon metrics-server=true in "no-preload-297799"
	W0729 11:53:36.791023   69419 addons.go:243] addon metrics-server should already be in state true
	I0729 11:53:36.790938   69419 config.go:182] Loaded profile config "no-preload-297799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 11:53:36.791055   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.791315   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791350   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791376   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791395   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.791424   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.791403   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.792400   69419 out.go:177] * Verifying Kubernetes components...
	I0729 11:53:36.793837   69419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:53:36.807811   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0729 11:53:36.807845   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0729 11:53:36.808292   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808347   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.808844   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808863   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.808971   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.808992   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.809204   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809364   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.809708   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809727   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.809868   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.809903   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.810196   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0729 11:53:36.810602   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.811069   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.811085   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.811578   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.811789   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.815254   69419 addons.go:234] Setting addon default-storageclass=true in "no-preload-297799"
	W0729 11:53:36.815319   69419 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:53:36.815351   69419 host.go:66] Checking if "no-preload-297799" exists ...
	I0729 11:53:36.815722   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.815767   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.826661   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0729 11:53:36.827259   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.827925   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.827947   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.828288   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.828475   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.829152   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0729 11:53:36.829483   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.829942   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.829954   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.830335   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.830448   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.830512   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.831779   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0729 11:53:36.832366   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.832499   69419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:53:36.832831   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.832843   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.833105   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.833659   69419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:53:36.833692   69419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:53:36.834047   69419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:36.834218   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:53:36.834243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.835105   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.837003   69419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:53:36.837668   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838105   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.838130   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.838304   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:53:36.838322   69419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:53:36.838340   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.838347   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.838505   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.838661   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.838834   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.841306   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841724   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.841742   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.841909   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.842081   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.842243   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.842405   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:36.853959   69419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0729 11:53:36.854349   69419 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:53:36.854825   69419 main.go:141] libmachine: Using API Version  1
	I0729 11:53:36.854849   69419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:53:36.855184   69419 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:53:36.855412   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetState
	I0729 11:53:36.857073   69419 main.go:141] libmachine: (no-preload-297799) Calling .DriverName
	I0729 11:53:36.857352   69419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:36.857363   69419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:53:36.857377   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHHostname
	I0729 11:53:36.860376   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860804   69419 main.go:141] libmachine: (no-preload-297799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:e4", ip: ""} in network mk-no-preload-297799: {Iface:virbr1 ExpiryTime:2024-07-29 12:38:49 +0000 UTC Type:0 Mac:52:54:00:4c:20:e4 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-297799 Clientid:01:52:54:00:4c:20:e4}
	I0729 11:53:36.860826   69419 main.go:141] libmachine: (no-preload-297799) DBG | domain no-preload-297799 has defined IP address 192.168.39.120 and MAC address 52:54:00:4c:20:e4 in network mk-no-preload-297799
	I0729 11:53:36.860973   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHPort
	I0729 11:53:36.861121   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHKeyPath
	I0729 11:53:36.861249   69419 main.go:141] libmachine: (no-preload-297799) Calling .GetSSHUsername
	I0729 11:53:36.861352   69419 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/no-preload-297799/id_rsa Username:docker}
	I0729 11:53:37.000840   69419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:53:37.058535   69419 node_ready.go:35] waiting up to 6m0s for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069231   69419 node_ready.go:49] node "no-preload-297799" has status "Ready":"True"
	I0729 11:53:37.069260   69419 node_ready.go:38] duration metric: took 10.69136ms for node "no-preload-297799" to be "Ready" ...
	I0729 11:53:37.069272   69419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:37.080726   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:37.122837   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:53:37.154216   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:53:37.177797   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:53:37.177821   69419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:53:37.298520   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:53:37.298546   69419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:53:37.410911   69419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:37.410935   69419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:53:37.502799   69419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:53:38.337421   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214547185s)
	I0729 11:53:38.337457   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183203433s)
	I0729 11:53:38.337490   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337491   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337500   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337506   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337775   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337790   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337800   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337807   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.337843   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.337844   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.337865   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.337873   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.337880   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.338007   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338016   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338091   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.338102   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.338108   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.417894   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.417921   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.418225   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.418250   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.418272   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642279   69419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139432943s)
	I0729 11:53:38.642328   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642343   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642656   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642677   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642680   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.642687   69419 main.go:141] libmachine: Making call to close driver server
	I0729 11:53:38.642712   69419 main.go:141] libmachine: (no-preload-297799) Calling .Close
	I0729 11:53:38.642956   69419 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:53:38.642975   69419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:53:38.642985   69419 addons.go:475] Verifying addon metrics-server=true in "no-preload-297799"
	I0729 11:53:38.642990   69419 main.go:141] libmachine: (no-preload-297799) DBG | Closing plugin on server side
	I0729 11:53:38.644958   69419 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 11:53:38.646417   69419 addons.go:510] duration metric: took 1.855596723s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
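For reference, the three addons enabled above can be checked by hand against the same cluster once the run completes. A minimal sketch, assuming the addons' usual object names (a "metrics-server" Deployment in kube-system and a default StorageClass), which this log does not spell out:

    kubectl --context no-preload-297799 -n kube-system get deploy metrics-server
    kubectl --context no-preload-297799 get storageclass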
	I0729 11:53:39.091531   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:41.587827   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.088096   69419 pod_ready.go:102] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"False"
	I0729 11:53:44.586486   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.586510   69419 pod_ready.go:81] duration metric: took 7.505759998s for pod "coredns-5cfdc65f69-7n6s7" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.586521   69419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591372   69419 pod_ready.go:92] pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.591394   69419 pod_ready.go:81] duration metric: took 4.865716ms for pod "coredns-5cfdc65f69-bnqrr" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.591404   69419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596377   69419 pod_ready.go:92] pod "etcd-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.596401   69419 pod_ready.go:81] duration metric: took 4.988985ms for pod "etcd-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.596412   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603151   69419 pod_ready.go:92] pod "kube-apiserver-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.603176   69419 pod_ready.go:81] duration metric: took 6.75609ms for pod "kube-apiserver-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.603187   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609494   69419 pod_ready.go:92] pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.609514   69419 pod_ready.go:81] duration metric: took 6.319727ms for pod "kube-controller-manager-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.609526   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984476   69419 pod_ready.go:92] pod "kube-proxy-blx4g" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:44.984505   69419 pod_ready.go:81] duration metric: took 374.971379ms for pod "kube-proxy-blx4g" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:44.984517   69419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385763   69419 pod_ready.go:92] pod "kube-scheduler-no-preload-297799" in "kube-system" namespace has status "Ready":"True"
	I0729 11:53:45.385792   69419 pod_ready.go:81] duration metric: took 401.266749ms for pod "kube-scheduler-no-preload-297799" in "kube-system" namespace to be "Ready" ...
	I0729 11:53:45.385802   69419 pod_ready.go:38] duration metric: took 8.316518469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:53:45.385821   69419 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:53:45.385887   69419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:53:45.404065   69419 api_server.go:72] duration metric: took 8.613282557s to wait for apiserver process to appear ...
	I0729 11:53:45.404093   69419 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:53:45.404114   69419 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 11:53:45.408027   69419 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 11:53:45.408985   69419 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 11:53:45.409011   69419 api_server.go:131] duration metric: took 4.91124ms to wait for apiserver health ...
	I0729 11:53:45.409020   69419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:53:45.587520   69419 system_pods.go:59] 9 kube-system pods found
	I0729 11:53:45.587552   69419 system_pods.go:61] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.587556   69419 system_pods.go:61] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.587560   69419 system_pods.go:61] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.587563   69419 system_pods.go:61] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.587568   69419 system_pods.go:61] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.587571   69419 system_pods.go:61] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.587574   69419 system_pods.go:61] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.587580   69419 system_pods.go:61] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.587584   69419 system_pods.go:61] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.587590   69419 system_pods.go:74] duration metric: took 178.563924ms to wait for pod list to return data ...
	I0729 11:53:45.587596   69419 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:53:45.784611   69419 default_sa.go:45] found service account: "default"
	I0729 11:53:45.784640   69419 default_sa.go:55] duration metric: took 197.037896ms for default service account to be created ...
	I0729 11:53:45.784659   69419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:53:45.992937   69419 system_pods.go:86] 9 kube-system pods found
	I0729 11:53:45.992973   69419 system_pods.go:89] "coredns-5cfdc65f69-7n6s7" [4e8e4916-ee1d-47ce-902b-7c6328514ca9] Running
	I0729 11:53:45.992982   69419 system_pods.go:89] "coredns-5cfdc65f69-bnqrr" [d2258a90-8b49-4cd3-9e84-6e3567ede3f3] Running
	I0729 11:53:45.992990   69419 system_pods.go:89] "etcd-no-preload-297799" [39b3e930-4750-4341-b4bf-175fbe2854ed] Running
	I0729 11:53:45.992996   69419 system_pods.go:89] "kube-apiserver-no-preload-297799" [e7441134-1910-449a-b0f2-85c78362229c] Running
	I0729 11:53:45.993003   69419 system_pods.go:89] "kube-controller-manager-no-preload-297799" [1f21966a-fe9b-4342-8722-a2b699a23d58] Running
	I0729 11:53:45.993010   69419 system_pods.go:89] "kube-proxy-blx4g" [892d6ac2-66bd-4af0-9bca-7018e1d51c1b] Running
	I0729 11:53:45.993017   69419 system_pods.go:89] "kube-scheduler-no-preload-297799" [142f6743-0be3-4ac4-b6fa-ab6b624284e2] Running
	I0729 11:53:45.993027   69419 system_pods.go:89] "metrics-server-78fcd8795b-vxjvd" [8b3c7ae7-d7bc-4216-96ba-b1e1640d94dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 11:53:45.993037   69419 system_pods.go:89] "storage-provisioner" [4afce5e3-3bcf-476d-9846-c57e98532d24] Running
	I0729 11:53:45.993047   69419 system_pods.go:126] duration metric: took 208.382518ms to wait for k8s-apps to be running ...
	I0729 11:53:45.993059   69419 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:53:45.993109   69419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:53:46.012248   69419 system_svc.go:56] duration metric: took 19.180103ms WaitForService to wait for kubelet
	I0729 11:53:46.012284   69419 kubeadm.go:582] duration metric: took 9.221504322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:53:46.012309   69419 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:53:46.186674   69419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:53:46.186723   69419 node_conditions.go:123] node cpu capacity is 2
	I0729 11:53:46.186736   69419 node_conditions.go:105] duration metric: took 174.422508ms to run NodePressure ...
	I0729 11:53:46.186747   69419 start.go:241] waiting for startup goroutines ...
	I0729 11:53:46.186753   69419 start.go:246] waiting for cluster config update ...
	I0729 11:53:46.186763   69419 start.go:255] writing updated cluster config ...
	I0729 11:53:46.187032   69419 ssh_runner.go:195] Run: rm -f paused
	I0729 11:53:46.236558   69419 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 11:53:46.239388   69419 out.go:177] * Done! kubectl is now configured to use "no-preload-297799" cluster and "default" namespace by default
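The readiness waits logged for no-preload-297799 above (node Ready, then each system-critical pod Ready) correspond roughly to standard kubectl checks. A sketch only, with the label selectors and the 6m0s timeout taken from the log; the exact command form is an assumption, not what minikube runs internally:

    kubectl --context no-preload-297799 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context no-preload-297799 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m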
	I0729 11:54:13.072567   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:13.072841   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:54:13.072861   70480 kubeadm.go:310] 
	I0729 11:54:13.072916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:54:13.072951   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:54:13.072959   70480 kubeadm.go:310] 
	I0729 11:54:13.072987   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:54:13.073016   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:54:13.073156   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:54:13.073181   70480 kubeadm.go:310] 
	I0729 11:54:13.073289   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:54:13.073328   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:54:13.073365   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:54:13.073373   70480 kubeadm.go:310] 
	I0729 11:54:13.073475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:54:13.073562   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:54:13.073572   70480 kubeadm.go:310] 
	I0729 11:54:13.073704   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:54:13.073835   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:54:13.073941   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:54:13.074115   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:54:13.074132   70480 kubeadm.go:310] 
	I0729 11:54:13.074287   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:54:13.074407   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:54:13.074523   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 11:54:13.074665   70480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
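The troubleshooting commands kubeadm suggests above have to run on the node itself; with minikube's KVM driver they can be wrapped in "minikube ssh". A sketch, assuming the failing profile here is old-k8s-version-188043 (the name that appears in the CRI-O log at the end of this section):

    minikube -p old-k8s-version-188043 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-188043 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    minikube -p old-k8s-version-188043 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a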
	
	I0729 11:54:13.074737   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 11:54:13.546511   70480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:54:13.564193   70480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:54:13.576259   70480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:54:13.576282   70480 kubeadm.go:157] found existing configuration files:
	
	I0729 11:54:13.576325   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:54:13.586785   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:54:13.586846   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:54:13.597376   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:54:13.607259   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:54:13.607330   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:54:13.619236   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.630278   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:54:13.630341   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:54:13.640574   70480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:54:13.650526   70480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:54:13.650584   70480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:54:13.660941   70480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:54:13.742767   70480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 11:54:13.742852   70480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:54:13.911296   70480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:54:13.911467   70480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:54:13.911607   70480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:54:14.123645   70480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:54:14.126464   70480 out.go:204]   - Generating certificates and keys ...
	I0729 11:54:14.126931   70480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:54:14.126995   70480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:54:14.127067   70480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 11:54:14.127146   70480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 11:54:14.127238   70480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 11:54:14.127328   70480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 11:54:14.127424   70480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 11:54:14.127506   70480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 11:54:14.129559   70480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 11:54:14.130588   70480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 11:54:14.130792   70480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 11:54:14.130931   70480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:54:14.373822   70480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:54:14.518174   70480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:54:14.850815   70480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:54:14.955918   70480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:54:14.979731   70480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:54:14.979884   70480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:54:14.979950   70480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:54:15.134480   70480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:54:15.137488   70480 out.go:204]   - Booting up control plane ...
	I0729 11:54:15.137635   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:54:15.150435   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:54:15.151842   70480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:54:15.152652   70480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:54:15.155022   70480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 11:54:55.157641   70480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 11:54:55.157771   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:54:55.158065   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:00.158857   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:00.159153   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:10.159857   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:55:30.160336   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:55:30.160554   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159649   70480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 11:56:10.159858   70480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 11:56:10.159872   70480 kubeadm.go:310] 
	I0729 11:56:10.159916   70480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 11:56:10.159968   70480 kubeadm.go:310] 		timed out waiting for the condition
	I0729 11:56:10.159976   70480 kubeadm.go:310] 
	I0729 11:56:10.160004   70480 kubeadm.go:310] 	This error is likely caused by:
	I0729 11:56:10.160034   70480 kubeadm.go:310] 		- The kubelet is not running
	I0729 11:56:10.160140   70480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 11:56:10.160160   70480 kubeadm.go:310] 
	I0729 11:56:10.160286   70480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 11:56:10.160348   70480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 11:56:10.160378   70480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 11:56:10.160384   70480 kubeadm.go:310] 
	I0729 11:56:10.160475   70480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 11:56:10.160547   70480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 11:56:10.160556   70480 kubeadm.go:310] 
	I0729 11:56:10.160652   70480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 11:56:10.160756   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 11:56:10.160878   70480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 11:56:10.160976   70480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 11:56:10.160985   70480 kubeadm.go:310] 
	I0729 11:56:10.162126   70480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:56:10.162224   70480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 11:56:10.162399   70480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 11:56:10.162415   70480 kubeadm.go:394] duration metric: took 7m59.00013135s to StartCluster
	I0729 11:56:10.162473   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:56:10.162606   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:56:10.208183   70480 cri.go:89] found id: ""
	I0729 11:56:10.208206   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.208214   70480 logs.go:278] No container was found matching "kube-apiserver"
	I0729 11:56:10.208219   70480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 11:56:10.208275   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:56:10.256265   70480 cri.go:89] found id: ""
	I0729 11:56:10.256293   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.256304   70480 logs.go:278] No container was found matching "etcd"
	I0729 11:56:10.256445   70480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 11:56:10.256515   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:56:10.296625   70480 cri.go:89] found id: ""
	I0729 11:56:10.296668   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.296688   70480 logs.go:278] No container was found matching "coredns"
	I0729 11:56:10.296704   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:56:10.296791   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:56:10.333771   70480 cri.go:89] found id: ""
	I0729 11:56:10.333797   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.333824   70480 logs.go:278] No container was found matching "kube-scheduler"
	I0729 11:56:10.333830   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:56:10.333887   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:56:10.371746   70480 cri.go:89] found id: ""
	I0729 11:56:10.371772   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.371782   70480 logs.go:278] No container was found matching "kube-proxy"
	I0729 11:56:10.371789   70480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:56:10.371850   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:56:10.408988   70480 cri.go:89] found id: ""
	I0729 11:56:10.409018   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.409028   70480 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 11:56:10.409036   70480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 11:56:10.409087   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:56:10.448706   70480 cri.go:89] found id: ""
	I0729 11:56:10.448731   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.448740   70480 logs.go:278] No container was found matching "kindnet"
	I0729 11:56:10.448749   70480 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:56:10.448808   70480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:56:10.485549   70480 cri.go:89] found id: ""
	I0729 11:56:10.485577   70480 logs.go:276] 0 containers: []
	W0729 11:56:10.485585   70480 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 11:56:10.485594   70480 logs.go:123] Gathering logs for kubelet ...
	I0729 11:56:10.485609   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:56:10.536953   70480 logs.go:123] Gathering logs for dmesg ...
	I0729 11:56:10.536989   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:56:10.554870   70480 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:56:10.554908   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 11:56:10.675935   70480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 11:56:10.675966   70480 logs.go:123] Gathering logs for CRI-O ...
	I0729 11:56:10.675983   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 11:56:10.792820   70480 logs.go:123] Gathering logs for container status ...
	I0729 11:56:10.792858   70480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 11:56:10.833493   70480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 11:56:10.833537   70480 out.go:239] * 
	W0729 11:56:10.833616   70480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.833644   70480 out.go:239] * 
	W0729 11:56:10.834607   70480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:56:10.838465   70480 out.go:177] 
	W0729 11:56:10.840213   70480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 11:56:10.840257   70480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 11:56:10.840282   70480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 11:56:10.841869   70480 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.142139195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254866142113605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1d66bc3-3189-46ac-aaef-9424f0dd8f1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.142769047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebdcd377-8fa4-4f15-9072-7342d338a52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.142836826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebdcd377-8fa4-4f15-9072-7342d338a52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.142871059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ebdcd377-8fa4-4f15-9072-7342d338a52a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.182130225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5dfc113-4f78-4b03-a050-a4de9676be59 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.182258512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5dfc113-4f78-4b03-a050-a4de9676be59 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.183832570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33573f6e-2880-463c-9133-8961d4e8cf20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.184343021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254866184312958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33573f6e-2880-463c-9133-8961d4e8cf20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.185789953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bb8ce69-d99b-472e-beda-29597ef1c83e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.185857924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bb8ce69-d99b-472e-beda-29597ef1c83e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.185899674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4bb8ce69-d99b-472e-beda-29597ef1c83e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.229769213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22b61876-3973-4809-9603-5b6cfc03bfc6 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.229844154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22b61876-3973-4809-9603-5b6cfc03bfc6 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.231091247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e220e48c-448f-4183-8ee4-187a9c2085f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.231481234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254866231456779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e220e48c-448f-4183-8ee4-187a9c2085f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.232184214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d194b56-9eda-455c-8d30-130f960c4ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.232240852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d194b56-9eda-455c-8d30-130f960c4ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.232271352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0d194b56-9eda-455c-8d30-130f960c4ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.265637949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0f86371-0cbc-46af-916d-e6742de40e83 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.265738282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0f86371-0cbc-46af-916d-e6742de40e83 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.267065395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f23ad928-4e52-498c-a2e5-b7db0cb383a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.267472402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254866267452108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f23ad928-4e52-498c-a2e5-b7db0cb383a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.267947828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd754dc5-a8f9-4044-98e1-8e96cc90f892 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.268105959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd754dc5-a8f9-4044-98e1-8e96cc90f892 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:46 old-k8s-version-188043 crio[643]: time="2024-07-29 12:07:46.268164354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd754dc5-a8f9-4044-98e1-8e96cc90f892 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051118] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040668] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.021934] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.587255] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 11:48] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.065705] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081948] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.207768] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.125104] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.281042] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.791991] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.065131] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.421882] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +12.167503] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 11:52] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[Jul29 11:54] systemd-fstab-generator[5288]: Ignoring "noauto" option for root device
	[  +0.063650] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:07:46 up 20 min,  0 users,  load average: 0.09, 0.05, 0.02
	Linux old-k8s-version-188043 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/net/lookup.go:293 +0xb9
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000b63180, 0xc000c44e40, 0x23, 0xc000ce0d40)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: created by internal/singleflight.(*Group).DoChan
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: goroutine 161 [syscall]:
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: net._C2func_getaddrinfo(0xc000cdd400, 0x0, 0xc000c530e0, 0xc0001221d8, 0x0, 0x0, 0x0)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         _cgo_gotypes.go:94 +0x55
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: net.cgoLookupIPCNAME.func1(0xc000cdd400, 0x20, 0x20, 0xc000c530e0, 0xc0001221d8, 0x0, 0xc000609ea0, 0x57a492)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000c44e10, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: net.cgoIPLookup(0xc000d5f1a0, 0x48ab5d6, 0x3, 0xc000c44e10, 0x1f)
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]: created by net.cgoLookupIP
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6795]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 29 12:07:41 old-k8s-version-188043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Jul 29 12:07:41 old-k8s-version-188043 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 12:07:41 old-k8s-version-188043 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6805]: I0729 12:07:41.905769    6805 server.go:416] Version: v1.20.0
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6805]: I0729 12:07:41.906066    6805 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6805]: I0729 12:07:41.907801    6805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6805]: W0729 12:07:41.908628    6805 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 12:07:41 old-k8s-version-188043 kubelet[6805]: I0729 12:07:41.909144    6805 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 2 (243.312778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-188043" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (149.85s)
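The kubeadm output and the minikube warning captured above already name the relevant follow-up commands; the sketch below only collects them into one sequence for reference. It assumes shell access to the node (for example via "minikube ssh -p old-k8s-version-188043", profile name taken from the log) and is illustrative only, not part of the test harness; the outcome of the retry is not verified here.

  # inside the node: check whether the kubelet is running and why it keeps restarting (commands quoted from the kubeadm output above)
  systemctl status kubelet
  journalctl -xeu kubelet
  # inside the node: list any control-plane containers that crashed at startup (crictl invocation quoted from the kubeadm output above)
  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # on the host: retry with the cgroup driver the minikube suggestion names (flag value quoted from the log above)
  minikube start -p old-k8s-version-188043 --extra-config=kubelet.cgroup-driver=systemd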

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 55
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 15.97
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 51.68
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 110.64
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 151.11
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.21
44 TestAddons/parallel/InspektorGadget 11.79
46 TestAddons/parallel/HelmTiller 12.41
48 TestAddons/parallel/CSI 73.48
49 TestAddons/parallel/Headlamp 14.51
50 TestAddons/parallel/CloudSpanner 6.76
51 TestAddons/parallel/LocalPath 12.23
52 TestAddons/parallel/NvidiaDevicePlugin 6.76
53 TestAddons/parallel/Yakd 11.99
55 TestCertOptions 58.03
56 TestCertExpiration 355.13
58 TestForceSystemdFlag 70.23
59 TestForceSystemdEnv 47.03
61 TestKVMDriverInstallOrUpdate 4.4
65 TestErrorSpam/setup 39.97
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.55
69 TestErrorSpam/unpause 1.56
70 TestErrorSpam/stop 4.34
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 95.25
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 40.53
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
82 TestFunctional/serial/CacheCmd/cache/add_local 2.19
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 34.34
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.45
93 TestFunctional/serial/LogsFileCmd 1.41
94 TestFunctional/serial/InvalidService 4.37
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 19.55
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.98
104 TestFunctional/parallel/ServiceCmdConnect 22.79
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 46.55
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.46
110 TestFunctional/parallel/MySQL 26.63
111 TestFunctional/parallel/FileSync 0.23
112 TestFunctional/parallel/CertSync 1.38
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
120 TestFunctional/parallel/License 0.6
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ServiceCmd/DeployApp 10.23
134 TestFunctional/parallel/ServiceCmd/List 0.53
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
137 TestFunctional/parallel/ServiceCmd/Format 0.36
138 TestFunctional/parallel/ServiceCmd/URL 0.32
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
140 TestFunctional/parallel/ProfileCmd/profile_list 0.38
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
142 TestFunctional/parallel/MountCmd/any-port 16.66
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
147 TestFunctional/parallel/ImageCommands/ImageBuild 5.79
148 TestFunctional/parallel/ImageCommands/Setup 1.96
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.43
150 TestFunctional/parallel/Version/short 0.05
151 TestFunctional/parallel/Version/components 0.56
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.81
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
156 TestFunctional/parallel/MountCmd/specific-port 1.71
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.41
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 244.4
167 TestMultiControlPlane/serial/DeployApp 6.71
168 TestMultiControlPlane/serial/PingHostFromPods 1.34
169 TestMultiControlPlane/serial/AddWorkerNode 55.85
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
172 TestMultiControlPlane/serial/CopyFile 12.87
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.48
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 282.55
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 81.57
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
188 TestJSONOutput/start/Command 96.54
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.75
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.35
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 91.02
220 TestMountStart/serial/StartWithMountFirst 27.99
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 25.16
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.7
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 22.95
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 126.36
232 TestMultiNode/serial/DeployApp2Nodes 5.39
233 TestMultiNode/serial/PingHostFrom2Pods 0.77
234 TestMultiNode/serial/AddNode 54.3
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.09
238 TestMultiNode/serial/StopNode 2.27
239 TestMultiNode/serial/StartAfterStop 40.31
241 TestMultiNode/serial/DeleteNode 2.45
243 TestMultiNode/serial/RestartMultiNode 179.94
244 TestMultiNode/serial/ValidateNameConflict 45.21
251 TestScheduledStopUnix 111.45
255 TestRunningBinaryUpgrade 118.84
266 TestNetworkPlugins/group/false 2.83
270 TestStoppedBinaryUpgrade/Setup 2.65
271 TestStoppedBinaryUpgrade/Upgrade 122.4
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
275 TestNoKubernetes/serial/StartWithK8s 47.64
276 TestNoKubernetes/serial/StartWithStopK8s 7
277 TestNoKubernetes/serial/Start 26.89
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
279 TestNoKubernetes/serial/ProfileList 19.25
281 TestPause/serial/Start 69.83
282 TestNoKubernetes/serial/Stop 1.38
283 TestNoKubernetes/serial/StartNoArgs 51.58
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
293 TestNetworkPlugins/group/auto/Start 87.53
294 TestNetworkPlugins/group/kindnet/Start 106.13
295 TestNetworkPlugins/group/calico/Start 137.47
296 TestNetworkPlugins/group/custom-flannel/Start 100.73
297 TestNetworkPlugins/group/auto/KubeletFlags 0.21
298 TestNetworkPlugins/group/auto/NetCatPod 10.26
299 TestNetworkPlugins/group/auto/DNS 0.16
300 TestNetworkPlugins/group/auto/Localhost 0.15
301 TestNetworkPlugins/group/auto/HairPin 0.12
302 TestNetworkPlugins/group/enable-default-cni/Start 112.35
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
305 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
306 TestNetworkPlugins/group/kindnet/DNS 0.19
307 TestNetworkPlugins/group/kindnet/Localhost 0.17
308 TestNetworkPlugins/group/kindnet/HairPin 0.15
309 TestNetworkPlugins/group/flannel/Start 81.8
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.21
312 TestNetworkPlugins/group/calico/NetCatPod 11.23
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
315 TestNetworkPlugins/group/custom-flannel/DNS 0.19
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
318 TestNetworkPlugins/group/calico/DNS 0.2
319 TestNetworkPlugins/group/calico/Localhost 0.15
320 TestNetworkPlugins/group/calico/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 101.41
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestStartStop/group/no-preload/serial/FirstStart 77.32
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
333 TestNetworkPlugins/group/flannel/NetCatPod 11.24
334 TestNetworkPlugins/group/flannel/DNS 0.21
335 TestNetworkPlugins/group/flannel/Localhost 0.22
336 TestNetworkPlugins/group/flannel/HairPin 0.16
338 TestStartStop/group/embed-certs/serial/FirstStart 103.14
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
340 TestNetworkPlugins/group/bridge/NetCatPod 12.28
341 TestNetworkPlugins/group/bridge/DNS 0.21
342 TestNetworkPlugins/group/bridge/Localhost 0.17
343 TestNetworkPlugins/group/bridge/HairPin 0.14
344 TestStartStop/group/no-preload/serial/DeployApp 9.36
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.69
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
349 TestStartStop/group/embed-certs/serial/DeployApp 9.31
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
356 TestStartStop/group/no-preload/serial/SecondStart 682.09
360 TestStartStop/group/embed-certs/serial/SecondStart 564.15
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 534.13
363 TestStartStop/group/old-k8s-version/serial/Stop 4.28
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
375 TestStartStop/group/newest-cni/serial/FirstStart 47.63
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
378 TestStartStop/group/newest-cni/serial/Stop 7.38
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/newest-cni/serial/SecondStart 37.25
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
384 TestStartStop/group/newest-cni/serial/Pause 2.57
x
+
TestDownloadOnly/v1.20.0/json-events (55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-550847 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-550847 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (54.995427498s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (55.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-550847
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-550847: exit status 85 (56.593121ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-550847 | jenkins | v1.33.1 | 29 Jul 24 10:20 UTC |          |
	|         | -p download-only-550847        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:20:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:20:21.988404   11075 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:20:21.988511   11075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:20:21.988518   11075 out.go:304] Setting ErrFile to fd 2...
	I0729 10:20:21.988522   11075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:20:21.988691   11075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	W0729 10:20:21.988799   11075 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19337-3845/.minikube/config/config.json: open /home/jenkins/minikube-integration/19337-3845/.minikube/config/config.json: no such file or directory
	I0729 10:20:21.989376   11075 out.go:298] Setting JSON to true
	I0729 10:20:21.990213   11075 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":168,"bootTime":1722248254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:20:21.990271   11075 start.go:139] virtualization: kvm guest
	I0729 10:20:21.992627   11075 out.go:97] [download-only-550847] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 10:20:21.992727   11075 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:20:21.992761   11075 notify.go:220] Checking for updates...
	I0729 10:20:21.994267   11075 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:20:21.995730   11075 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:20:21.997155   11075 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:20:21.998554   11075 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:20:21.999908   11075 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 10:20:22.002547   11075 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:20:22.002861   11075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:20:22.108781   11075 out.go:97] Using the kvm2 driver based on user configuration
	I0729 10:20:22.108808   11075 start.go:297] selected driver: kvm2
	I0729 10:20:22.108815   11075 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:20:22.109301   11075 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:20:22.109437   11075 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:20:22.125824   11075 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:20:22.125869   11075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:20:22.126325   11075 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 10:20:22.126476   11075 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:20:22.126532   11075 cni.go:84] Creating CNI manager for ""
	I0729 10:20:22.126545   11075 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:20:22.126552   11075 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:20:22.126597   11075 start.go:340] cluster config:
	{Name:download-only-550847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-550847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:20:22.126817   11075 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:20:22.128871   11075 out.go:97] Downloading VM boot image ...
	I0729 10:20:22.128908   11075 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:20:34.545680   11075 out.go:97] Starting "download-only-550847" primary control-plane node in "download-only-550847" cluster
	I0729 10:20:34.545698   11075 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 10:20:34.658158   11075 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 10:20:34.658203   11075 cache.go:56] Caching tarball of preloaded images
	I0729 10:20:34.658394   11075 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 10:20:34.660273   11075 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:20:34.660289   11075 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:20:34.774559   11075 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 10:20:48.199585   11075 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:20:48.199718   11075 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:20:49.108888   11075 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 10:20:49.109248   11075 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/download-only-550847/config.json ...
	I0729 10:20:49.109284   11075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/download-only-550847/config.json: {Name:mk3bfca3b5b12b35452538d022063e5a502f469e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:20:49.109473   11075 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 10:20:49.109710   11075 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-550847 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
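For reference, the preload fetch recorded in the log above can be reproduced by hand; the sketch below re-downloads the tarball URL the log prints and checks it against the md5 value from the same download.go line. It is an illustrative shell sketch of the download-and-verify step, not how minikube itself performs the caching.

  # URL and md5 taken verbatim from the download.go line in the log above
  curl -fLo preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
    "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
  echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -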

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-550847
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (15.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-120370 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-120370 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.972846125s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (15.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-120370
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-120370: exit status 85 (56.3952ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-550847 | jenkins | v1.33.1 | 29 Jul 24 10:20 UTC |                     |
	|         | -p download-only-550847        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| delete  | -p download-only-550847        | download-only-550847 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-120370 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC |                     |
	|         | -p download-only-120370        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:21:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:21:17.290683   11841 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:21:17.290831   11841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:21:17.290842   11841 out.go:304] Setting ErrFile to fd 2...
	I0729 10:21:17.290848   11841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:21:17.291051   11841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:21:17.291588   11841 out.go:298] Setting JSON to true
	I0729 10:21:17.292422   11841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":223,"bootTime":1722248254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:21:17.292479   11841 start.go:139] virtualization: kvm guest
	I0729 10:21:17.294345   11841 out.go:97] [download-only-120370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:21:17.294458   11841 notify.go:220] Checking for updates...
	I0729 10:21:17.295847   11841 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:21:17.297237   11841 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:21:17.298528   11841 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:21:17.299865   11841 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:21:17.301067   11841 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 10:21:17.303302   11841 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:21:17.303571   11841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:21:17.334729   11841 out.go:97] Using the kvm2 driver based on user configuration
	I0729 10:21:17.334758   11841 start.go:297] selected driver: kvm2
	I0729 10:21:17.334764   11841 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:21:17.335102   11841 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:21:17.335187   11841 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:21:17.350015   11841 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:21:17.350070   11841 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:21:17.350537   11841 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 10:21:17.350666   11841 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:21:17.350828   11841 cni.go:84] Creating CNI manager for ""
	I0729 10:21:17.350860   11841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:21:17.350872   11841 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:21:17.350951   11841 start.go:340] cluster config:
	{Name:download-only-120370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-120370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:21:17.351055   11841 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:21:17.352867   11841 out.go:97] Starting "download-only-120370" primary control-plane node in "download-only-120370" cluster
	I0729 10:21:17.352894   11841 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:21:17.944230   11841 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:21:17.944262   11841 cache.go:56] Caching tarball of preloaded images
	I0729 10:21:17.944419   11841 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:21:17.946152   11841 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 10:21:17.946165   11841 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:21:18.054753   11841 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:21:31.634366   11841 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:21:31.634448   11841 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-120370 host does not exist
	  To start a cluster, run: "minikube start -p download-only-120370"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-120370
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (51.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-876146 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-876146 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.675999834s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (51.68s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-876146
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-876146: exit status 85 (56.834436ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-550847 | jenkins | v1.33.1 | 29 Jul 24 10:20 UTC |                     |
	|         | -p download-only-550847             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| delete  | -p download-only-550847             | download-only-550847 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| start   | -o=json --download-only             | download-only-120370 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC |                     |
	|         | -p download-only-120370             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| delete  | -p download-only-120370             | download-only-120370 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC | 29 Jul 24 10:21 UTC |
	| start   | -o=json --download-only             | download-only-876146 | jenkins | v1.33.1 | 29 Jul 24 10:21 UTC |                     |
	|         | -p download-only-876146             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:21:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:21:33.570235   12060 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:21:33.570459   12060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:21:33.570467   12060 out.go:304] Setting ErrFile to fd 2...
	I0729 10:21:33.570470   12060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:21:33.570672   12060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:21:33.571251   12060 out.go:298] Setting JSON to true
	I0729 10:21:33.572078   12060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":240,"bootTime":1722248254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:21:33.572136   12060 start.go:139] virtualization: kvm guest
	I0729 10:21:33.574566   12060 out.go:97] [download-only-876146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:21:33.574733   12060 notify.go:220] Checking for updates...
	I0729 10:21:33.576169   12060 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:21:33.577519   12060 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:21:33.579208   12060 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:21:33.580627   12060 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:21:33.582122   12060 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 10:21:33.584595   12060 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:21:33.584820   12060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:21:33.616233   12060 out.go:97] Using the kvm2 driver based on user configuration
	I0729 10:21:33.616258   12060 start.go:297] selected driver: kvm2
	I0729 10:21:33.616264   12060 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:21:33.616563   12060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:21:33.616641   12060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19337-3845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:21:33.631096   12060 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:21:33.631138   12060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:21:33.631586   12060 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 10:21:33.631714   12060 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:21:33.631763   12060 cni.go:84] Creating CNI manager for ""
	I0729 10:21:33.631775   12060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:21:33.631784   12060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:21:33.631840   12060 start.go:340] cluster config:
	{Name:download-only-876146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-876146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:21:33.631952   12060 iso.go:125] acquiring lock: {Name:mkfc7ba7ca67cc52aa65b4f99222d1f891c3ec47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:21:33.633693   12060 out.go:97] Starting "download-only-876146" primary control-plane node in "download-only-876146" cluster
	I0729 10:21:33.633720   12060 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 10:21:33.794337   12060 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 10:21:33.794379   12060 cache.go:56] Caching tarball of preloaded images
	I0729 10:21:33.794556   12060 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 10:21:33.796533   12060 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 10:21:33.796553   12060 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:21:33.906000   12060 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 10:21:45.372507   12060 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:21:45.372594   12060 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19337-3845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:21:46.113361   12060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 10:21:46.113677   12060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/download-only-876146/config.json ...
	I0729 10:21:46.113708   12060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/download-only-876146/config.json: {Name:mkf795bddf48b5e712daa183ad95206587683eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:21:46.113882   12060 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 10:21:46.114014   12060 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19337-3845/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-876146 host does not exist
	  To start a cluster, run: "minikube start -p download-only-876146"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
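
Note: the download-only logs above show the preload tarball being fetched with an md5 checksum attached to the URL and then verified on disk before the cache is treated as valid. As a rough illustration of that download-then-verify pattern (not minikube's actual download.go/preload.go code; the URL and digest are copied from the log line above, the helper itself is a hypothetical sketch):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and checks the md5 digest of the bytes
// written against wantMD5 (a hex string). Hypothetical helper, shown only to
// illustrate the download-then-verify flow recorded in the log above.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest taken from the preload download line in the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "3743f5ddb63994a661f14e5a8d3af98c"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}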

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-876146
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-960068 --alsologtostderr --binary-mirror http://127.0.0.1:33367 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-960068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-960068
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (110.64s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-290694 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-290694 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m49.667683126s)
helpers_test.go:175: Cleaning up "offline-crio-290694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-290694
--- PASS: TestOffline (110.64s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-342031
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-342031: exit status 85 (45.545866ms)

                                                
                                                
-- stdout --
	* Profile "addons-342031" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-342031"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-342031
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-342031: exit status 85 (45.045694ms)

                                                
                                                
-- stdout --
	* Profile "addons-342031" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-342031"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (151.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-342031 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-342031 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.107948002s)
--- PASS: TestAddons/Setup (151.11s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-342031 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-342031 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/parallel/Registry (16.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.188317ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-t9mch" [c7896ca2-19fe-4e63-acf0-f820d1e54537] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005922289s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vvvpt" [4854d6ef-fcb6-430d-aa34-fba27a2e4685] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007539386s
addons_test.go:342: (dbg) Run:  kubectl --context addons-342031 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-342031 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-342031 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.238904556s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.21s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8sxnw" [95e1677c-251d-4425-8ec5-397145638728] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007434875s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-342031
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-342031: (5.784761711s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.41s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.461022ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-j4zgl" [622a71ad-23e4-4ae3-bdce-fccd9e31b58c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005072698s
addons_test.go:475: (dbg) Run:  kubectl --context addons-342031 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-342031 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.802282189s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.41s)

                                                
                                    
TestAddons/parallel/CSI (73.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.714527ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-342031 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/07/29 10:25:32 [DEBUG] GET http://192.168.39.224:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-342031 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [35d6c07f-5fd1-4406-8461-71c84144d2f4] Pending
helpers_test.go:344: "task-pv-pod" [35d6c07f-5fd1-4406-8461-71c84144d2f4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [35d6c07f-5fd1-4406-8461-71c84144d2f4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004550286s
addons_test.go:590: (dbg) Run:  kubectl --context addons-342031 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342031 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342031 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-342031 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-342031 delete pod task-pv-pod: (1.029839573s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-342031 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-342031 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-342031 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e0952c7b-e6b1-445f-a31c-7054fad90546] Pending
helpers_test.go:344: "task-pv-pod-restore" [e0952c7b-e6b1-445f-a31c-7054fad90546] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e0952c7b-e6b1-445f-a31c-7054fad90546] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003635503s
addons_test.go:632: (dbg) Run:  kubectl --context addons-342031 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-342031 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-342031 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.763698502s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.48s)
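
Note: the long run of "kubectl ... get pvc hpvc -o jsonpath={.status.phase}" lines above is the test helper polling the claim until it reaches the phase it is waiting for. A minimal sketch of that polling loop (a hypothetical stand-in, not the actual helpers_test.go code; the context name, interval, and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl, like the helper in the log above,
// until the PVC reports wantPhase or the timeout expires. Hypothetical sketch.
func waitForPVCPhase(context, namespace, name, wantPhase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantPhase {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q did not reach phase %q within %v", name, wantPhase, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-342031", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}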

                                                
                                    
TestAddons/parallel/Headlamp (14.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-342031 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-342031 --alsologtostderr -v=1: (1.194170938s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-vd7js" [7d782200-b333-4563-9bc5-012f43886495] Pending
helpers_test.go:344: "headlamp-7867546754-vd7js" [7d782200-b333-4563-9bc5-012f43886495] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-vd7js" [7d782200-b333-4563-9bc5-012f43886495] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.016280846s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (14.51s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-8vjw5" [8bc3c7f9-e79c-40cd-a880-0ac0fe81a402] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004696694s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-342031
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                    
TestAddons/parallel/LocalPath (12.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-342031 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-342031 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9d1f75ff-7974-486e-bb3f-21a089f294ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9d1f75ff-7974-486e-bb3f-21a089f294ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9d1f75ff-7974-486e-bb3f-21a089f294ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003699115s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-342031 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 ssh "cat /opt/local-path-provisioner/pvc-48e69630-5ff6-45b0-be49-8c195291cc40_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-342031 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-342031 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hn9w7" [4ec41c4d-a5b9-4145-965a-16a2cc121387] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006127934s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-342031
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
TestAddons/parallel/Yakd (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-8lrlv" [a14c3dde-3982-41d1-ae9f-dc9d5663fde6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004677679s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-342031 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-342031 addons disable yakd --alsologtostderr -v=1: (5.984198701s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
TestCertOptions (58.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-224523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0729 11:29:57.915633   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-224523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (56.626948872s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-224523 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-224523 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-224523 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-224523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-224523
--- PASS: TestCertOptions (58.03s)
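
Note: TestCertOptions requests extra --apiserver-ips/--apiserver-names and then reads the generated apiserver.crt back over ssh with openssl to confirm they made it into the certificate. The same check can be done from Go with crypto/x509; the sketch below (hypothetical, with an illustrative local file path rather than the VM path read over ssh) parses a PEM certificate and verifies that an expected IP and DNS name appear among its SANs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// certHasSANs parses a PEM-encoded certificate and reports whether it contains
// the given IP and DNS name among its subject alternative names.
func certHasSANs(pemBytes []byte, wantIP, wantDNS string) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP(wantIP)) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	return ipOK && dnsOK, nil
}

func main() {
	// Illustrative path; in the test the file lives on the minikube VM at
	// /var/lib/minikube/certs/apiserver.crt and is read over ssh.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ok, err := certHasSANs(data, "192.168.15.15", "www.google.com")
	fmt.Println(ok, err)
}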

                                                
                                    
TestCertExpiration (355.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-338366 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-338366 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m50.407135844s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-338366 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-338366 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m3.626677631s)
helpers_test.go:175: Cleaning up "cert-expiration-338366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-338366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-338366: (1.095903641s)
--- PASS: TestCertExpiration (355.13s)

                                                
                                    
TestForceSystemdFlag (70.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-371697 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-371697 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.058262055s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-371697 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-371697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-371697
--- PASS: TestForceSystemdFlag (70.23s)

                                                
                                    
TestForceSystemdEnv (47.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-802488 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-802488 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.231316231s)
helpers_test.go:175: Cleaning up "force-systemd-env-802488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-802488
--- PASS: TestForceSystemdEnv (47.03s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.4s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.40s)

                                                
                                    
TestErrorSpam/setup (39.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-963143 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-963143 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-963143 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-963143 --driver=kvm2  --container-runtime=crio: (39.967323889s)
--- PASS: TestErrorSpam/setup (39.97s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (4.34s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop: (1.604291433s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop: (1.478347728s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-963143 --log_dir /tmp/nospam-963143 stop: (1.257034161s)
--- PASS: TestErrorSpam/stop (4.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19337-3845/.minikube/files/etc/test/nested/copy/11064/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (95.25s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 10:34:57.916053   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:57.921828   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:57.932149   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:57.952441   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:57.992745   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:58.073070   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:58.233518   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:58.554130   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:34:59.195062   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:35:00.475585   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:35:03.036206   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:35:08.156360   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:35:18.397194   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:35:38.878249   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:36:19.839758   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-503222 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.252209125s)
--- PASS: TestFunctional/serial/StartWithProxy (95.25s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.53s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-503222 --alsologtostderr -v=8: (40.532522116s)
functional_test.go:659: soft start took 40.533344531s for "functional-503222" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.53s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-503222 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 cache add registry.k8s.io/pause:3.3: (1.076164313s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 cache add registry.k8s.io/pause:latest: (1.007871144s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-503222 /tmp/TestFunctionalserialCacheCmdcacheadd_local1388341642/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache add minikube-local-cache-test:functional-503222
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 cache add minikube-local-cache-test:functional-503222: (1.857189784s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache delete minikube-local-cache-test:functional-503222
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-503222
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (216.507249ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 kubectl -- --context functional-503222 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-503222 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.34s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 10:37:41.763509   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-503222 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.337591324s)
functional_test.go:757: restart took 34.33774214s for "functional-503222" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.34s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-503222 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 logs: (1.450659389s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 logs --file /tmp/TestFunctionalserialLogsFileCmd134513991/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 logs --file /tmp/TestFunctionalserialLogsFileCmd134513991/001/logs.txt: (1.414161859s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-503222 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-503222
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-503222: exit status 115 (272.326692ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.208:32546 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-503222 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 config get cpus: exit status 14 (57.598189ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 config get cpus: exit status 14 (52.417045ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (19.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-503222 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-503222 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21390: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.55s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-503222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.701078ms)

                                                
                                                
-- stdout --
	* [functional-503222] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:38:30.528367   21176 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:38:30.528619   21176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:30.528631   21176 out.go:304] Setting ErrFile to fd 2...
	I0729 10:38:30.528640   21176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:30.528844   21176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:38:30.529400   21176 out.go:298] Setting JSON to false
	I0729 10:38:30.530446   21176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1257,"bootTime":1722248254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:38:30.530517   21176 start.go:139] virtualization: kvm guest
	I0729 10:38:30.532898   21176 out.go:177] * [functional-503222] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:38:30.534351   21176 notify.go:220] Checking for updates...
	I0729 10:38:30.534357   21176 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:38:30.535939   21176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:38:30.537338   21176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:38:30.538792   21176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:30.540171   21176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:38:30.541454   21176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:38:30.543123   21176 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:38:30.543499   21176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:30.543562   21176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:30.561667   21176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0729 10:38:30.562129   21176 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:30.562793   21176 main.go:141] libmachine: Using API Version  1
	I0729 10:38:30.562946   21176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:30.563368   21176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:30.563575   21176 main.go:141] libmachine: (functional-503222) Calling .DriverName
	I0729 10:38:30.563834   21176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:38:30.564130   21176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:30.564172   21176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:30.580250   21176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0729 10:38:30.580756   21176 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:30.581333   21176 main.go:141] libmachine: Using API Version  1
	I0729 10:38:30.581349   21176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:30.581665   21176 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:30.581892   21176 main.go:141] libmachine: (functional-503222) Calling .DriverName
	I0729 10:38:30.615012   21176 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 10:38:30.616437   21176 start.go:297] selected driver: kvm2
	I0729 10:38:30.616451   21176 start.go:901] validating driver "kvm2" against &{Name:functional-503222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-503222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:38:30.616598   21176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:38:30.618721   21176 out.go:177] 
	W0729 10:38:30.620044   21176 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 10:38:30.621462   21176 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-503222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-503222 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.165602ms)

                                                
                                                
-- stdout --
	* [functional-503222] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:38:30.804962   21254 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:38:30.805090   21254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:30.805101   21254 out.go:304] Setting ErrFile to fd 2...
	I0729 10:38:30.805105   21254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:38:30.805376   21254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 10:38:30.805903   21254 out.go:298] Setting JSON to false
	I0729 10:38:30.806842   21254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1257,"bootTime":1722248254,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:38:30.806904   21254 start.go:139] virtualization: kvm guest
	I0729 10:38:30.808897   21254 out.go:177] * [functional-503222] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 10:38:30.810979   21254 notify.go:220] Checking for updates...
	I0729 10:38:30.810993   21254 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:38:30.812547   21254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:38:30.814052   21254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 10:38:30.815493   21254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 10:38:30.816898   21254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:38:30.818252   21254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:38:30.819861   21254 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:38:30.820266   21254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:30.820309   21254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:30.835057   21254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0729 10:38:30.835512   21254 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:30.836139   21254 main.go:141] libmachine: Using API Version  1
	I0729 10:38:30.836163   21254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:30.836576   21254 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:30.836784   21254 main.go:141] libmachine: (functional-503222) Calling .DriverName
	I0729 10:38:30.837028   21254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:38:30.837308   21254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:38:30.837356   21254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:38:30.851720   21254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0729 10:38:30.852127   21254 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:38:30.852573   21254 main.go:141] libmachine: Using API Version  1
	I0729 10:38:30.852593   21254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:38:30.852956   21254 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:38:30.853129   21254 main.go:141] libmachine: (functional-503222) Calling .DriverName
	I0729 10:38:30.885672   21254 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 10:38:30.886927   21254 start.go:297] selected driver: kvm2
	I0729 10:38:30.886938   21254 start.go:901] validating driver "kvm2" against &{Name:functional-503222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-503222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:38:30.887041   21254 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:38:30.888855   21254 out.go:177] 
	W0729 10:38:30.890187   21254 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 10:38:30.891663   21254 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (22.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-503222 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-503222 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-sjhj7" [a6289d71-aa48-4ecb-85ce-7844e1a7e388] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-sjhj7" [a6289d71-aa48-4ecb-85ce-7844e1a7e388] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.003388798s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.208:30816
functional_test.go:1671: http://192.168.39.208:30816: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-sjhj7

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.208:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.208:30816
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.79s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b3dcf17a-fc9e-440e-adbb-8b9a98502675] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004398601s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-503222 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-503222 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-503222 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-503222 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d6735528-dc84-49f1-87e2-3e4e08181ccb] Pending
helpers_test.go:344: "sp-pod" [d6735528-dc84-49f1-87e2-3e4e08181ccb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d6735528-dc84-49f1-87e2-3e4e08181ccb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004262253s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-503222 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-503222 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-503222 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00b37e4e-acc5-431d-8faf-8d73316f70b2] Pending
helpers_test.go:344: "sp-pod" [00b37e4e-acc5-431d-8faf-8d73316f70b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00b37e4e-acc5-431d-8faf-8d73316f70b2] Running
2024/07/29 10:38:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004792752s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-503222 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.55s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh -n functional-503222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cp functional-503222:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2272063794/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh -n functional-503222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh -n functional-503222 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-503222 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-p6l6z" [66d597de-bcd8-4f6c-b027-e6d21b1686d4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-p6l6z" [66d597de-bcd8-4f6c-b027-e6d21b1686d4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.005929194s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-503222 exec mysql-64454c8b5c-p6l6z -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-503222 exec mysql-64454c8b5c-p6l6z -- mysql -ppassword -e "show databases;": exit status 1 (187.349347ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-503222 exec mysql-64454c8b5c-p6l6z -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-503222 exec mysql-64454c8b5c-p6l6z -- mysql -ppassword -e "show databases;": exit status 1 (170.934052ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-503222 exec mysql-64454c8b5c-p6l6z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.63s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11064/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /etc/test/nested/copy/11064/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11064.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /etc/ssl/certs/11064.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11064.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /usr/share/ca-certificates/11064.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/110642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /etc/ssl/certs/110642.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/110642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /usr/share/ca-certificates/110642.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-503222 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "sudo systemctl is-active docker": exit status 1 (244.441728ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "sudo systemctl is-active containerd": exit status 1 (255.336455ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
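All three UpdateContextCmd variants exercise the same subcommand: update-context rewrites the kubeconfig entry for the profile so kubectl points at the cluster's current IP and port. A minimal sketch of running it and inspecting the result (the context name is assumed to equal the profile name, which is minikube's default):

    # Re-sync the kubeconfig entry for this profile
    out/minikube-linux-amd64 -p functional-503222 update-context
    # Inspect the refreshed context entry
    kubectl config get-contexts functional-503222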

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-503222 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-503222 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-lcndd" [29617d35-6270-4abe-984c-70fb84ec5a38] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-lcndd" [29617d35-6270-4abe-984c-70fb84ec5a38] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004131996s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)
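The remaining ServiceCmd subtests all operate on the hello-node Deployment and NodePort Service created here. A minimal sketch of the same deploy-and-expose flow, with an explicit readiness wait instead of the harness's pod polling (image and port taken from the commands above):

    kubectl --context functional-503222 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-503222 expose deployment hello-node --type=NodePort --port=8080
    # Block until the backing pod is Ready
    kubectl --context functional-503222 wait --for=condition=ready pod -l app=hello-node --timeout=120s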

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service list -o json
functional_test.go:1490: Took "526.316399ms" to run "out/minikube-linux-amd64 -p functional-503222 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
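The JSON form of service list is the one meant for post-processing. A minimal sketch of extracting service names with jq (jq is assumed to be installed, and the Name field in the filter is an assumption about the output shape rather than something shown in this log):

    # List services for the profile and print their names (field name assumed)
    out/minikube-linux-amd64 -p functional-503222 service list -o json | jq -r '.[].Name'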

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.208:30966
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.208:30966
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
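The HTTPS, Format, and URL subtests above all resolve the same NodePort endpoint (192.168.39.208:30966 in this run). A minimal sketch of capturing the URL and actually requesting it, which the test itself does not do:

    # Grab the NodePort URL for hello-node and fetch it
    URL=$(out/minikube-linux-amd64 -p functional-503222 service hello-node --url)
    curl -s "$URL"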

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "326.782264ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "48.638045ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "262.697458ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "43.780244ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
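profile list enumerates the profiles in the local minikube home; the --light variants skip the per-cluster status probe, which is why they return in roughly 44-49ms here versus 260-330ms for the full listings. A minimal sketch (the valid[].Name path in the jq filter is an assumption about the JSON shape, not something printed in this log):

    # Fast listing without contacting each cluster
    out/minikube-linux-amd64 profile list -o json --light
    # Full listing, then extract profile names (field names assumed)
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'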

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (16.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdany-port3550655895/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722249497711515210" to /tmp/TestFunctionalparallelMountCmdany-port3550655895/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722249497711515210" to /tmp/TestFunctionalparallelMountCmdany-port3550655895/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722249497711515210" to /tmp/TestFunctionalparallelMountCmdany-port3550655895/001/test-1722249497711515210
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.189897ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 10:38 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 10:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 10:38 test-1722249497711515210
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh cat /mount-9p/test-1722249497711515210
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-503222 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [189520e1-a025-4c22-bfbd-5c89c2a5f032] Pending
helpers_test.go:344: "busybox-mount" [189520e1-a025-4c22-bfbd-5c89c2a5f032] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [189520e1-a025-4c22-bfbd-5c89c2a5f032] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [189520e1-a025-4c22-bfbd-5c89c2a5f032] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.004109132s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-503222 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdany-port3550655895/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.66s)
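The mount test shares a host temp directory into the guest over 9p at /mount-9p, verifies it with findmnt and ls, then has the busybox-mount pod read and remove files through it. A minimal sketch of the same flow by hand; the mount command stays in the foreground, so it is backgrounded here, and the host path is just an example:

    mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-by-host
    # Start the 9p mount in the background; it runs until killed
    out/minikube-linux-amd64 mount -p functional-503222 /tmp/demo-mount:/mount-9p &
    MOUNT_PID=$!
    # Confirm the share is visible inside the guest
    out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p && ls -la /mount-9p"
    kill $MOUNT_PID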

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-503222 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-503222
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-503222
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-503222 image ls --format short --alsologtostderr:
I0729 10:38:37.558113   22206 out.go:291] Setting OutFile to fd 1 ...
I0729 10:38:37.558213   22206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:37.558222   22206 out.go:304] Setting ErrFile to fd 2...
I0729 10:38:37.558226   22206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:37.558417   22206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
I0729 10:38:37.558984   22206 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:37.559086   22206 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:37.559424   22206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:37.559462   22206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:37.574734   22206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
I0729 10:38:37.575182   22206 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:37.575767   22206 main.go:141] libmachine: Using API Version  1
I0729 10:38:37.575787   22206 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:37.576134   22206 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:37.576314   22206 main.go:141] libmachine: (functional-503222) Calling .GetState
I0729 10:38:37.577950   22206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:37.578009   22206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:37.594166   22206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
I0729 10:38:37.594554   22206 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:37.595030   22206 main.go:141] libmachine: Using API Version  1
I0729 10:38:37.595048   22206 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:37.595509   22206 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:37.595725   22206 main.go:141] libmachine: (functional-503222) Calling .DriverName
I0729 10:38:37.595938   22206 ssh_runner.go:195] Run: systemctl --version
I0729 10:38:37.595960   22206 main.go:141] libmachine: (functional-503222) Calling .GetSSHHostname
I0729 10:38:37.598662   22206 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:37.599059   22206 main.go:141] libmachine: (functional-503222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:83:76", ip: ""} in network mk-functional-503222: {Iface:virbr1 ExpiryTime:2024-07-29 11:35:12 +0000 UTC Type:0 Mac:52:54:00:bd:83:76 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-503222 Clientid:01:52:54:00:bd:83:76}
I0729 10:38:37.599093   22206 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined IP address 192.168.39.208 and MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:37.599207   22206 main.go:141] libmachine: (functional-503222) Calling .GetSSHPort
I0729 10:38:37.599368   22206 main.go:141] libmachine: (functional-503222) Calling .GetSSHKeyPath
I0729 10:38:37.599505   22206 main.go:141] libmachine: (functional-503222) Calling .GetSSHUsername
I0729 10:38:37.599614   22206 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/functional-503222/id_rsa Username:docker}
I0729 10:38:37.700350   22206 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 10:38:37.778212   22206 main.go:141] libmachine: Making call to close driver server
I0729 10:38:37.778226   22206 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:37.778588   22206 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:37.778597   22206 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:37.778670   22206 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 10:38:37.778686   22206 main.go:141] libmachine: Making call to close driver server
I0729 10:38:37.778696   22206 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:37.778919   22206 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:37.778934   22206 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
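As the stderr above shows, image ls is ultimately backed by sudo crictl images --output json on the node, so the same inventory can be read directly over SSH. A minimal sketch:

    # The raw image inventory the short listing is derived from
    out/minikube-linux-amd64 -p functional-503222 ssh "sudo crictl images"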

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-503222 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server           | functional-503222  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-503222  | 33776f96cac1f | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-503222 image ls --format table --alsologtostderr:
I0729 10:38:38.316884   22331 out.go:291] Setting OutFile to fd 1 ...
I0729 10:38:38.317002   22331 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.317014   22331 out.go:304] Setting ErrFile to fd 2...
I0729 10:38:38.317021   22331 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.317211   22331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
I0729 10:38:38.317885   22331 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.318026   22331 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.318516   22331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.318566   22331 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.333391   22331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
I0729 10:38:38.333860   22331 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.334377   22331 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.334397   22331 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.334692   22331 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.334898   22331 main.go:141] libmachine: (functional-503222) Calling .GetState
I0729 10:38:38.336611   22331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.336658   22331 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.351310   22331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
I0729 10:38:38.351705   22331 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.352281   22331 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.352319   22331 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.352659   22331 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.352881   22331 main.go:141] libmachine: (functional-503222) Calling .DriverName
I0729 10:38:38.353124   22331 ssh_runner.go:195] Run: systemctl --version
I0729 10:38:38.353158   22331 main.go:141] libmachine: (functional-503222) Calling .GetSSHHostname
I0729 10:38:38.356001   22331 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.356492   22331 main.go:141] libmachine: (functional-503222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:83:76", ip: ""} in network mk-functional-503222: {Iface:virbr1 ExpiryTime:2024-07-29 11:35:12 +0000 UTC Type:0 Mac:52:54:00:bd:83:76 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-503222 Clientid:01:52:54:00:bd:83:76}
I0729 10:38:38.356515   22331 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined IP address 192.168.39.208 and MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.356668   22331 main.go:141] libmachine: (functional-503222) Calling .GetSSHPort
I0729 10:38:38.356832   22331 main.go:141] libmachine: (functional-503222) Calling .GetSSHKeyPath
I0729 10:38:38.357007   22331 main.go:141] libmachine: (functional-503222) Calling .GetSSHUsername
I0729 10:38:38.357158   22331 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/functional-503222/id_rsa Username:docker}
I0729 10:38:38.439377   22331 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 10:38:38.490746   22331 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.490760   22331 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.491074   22331 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.491089   22331 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 10:38:38.491091   22331 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:38.491115   22331 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.491123   22331 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.491392   22331 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:38.491403   22331 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.491415   22331 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-503222 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-503222"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b815
6d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"33776f96cac1f046968066cc08e7326887f125992b5c73d06a5785ec5c956717","repoDigests":["localhost/minikube-local-cache-test@sha256:606b426f0eebcea98c4f4b56c0a54c805f5516df53df1daaf8ef8a3a85e544dd"],"repoTags":["localhost/minikube-local-cache-test:functional-503222"],"size":"3330"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807
e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["reg
istry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-sc
heduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412b
cca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io
/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-503222 image ls --format json --alsologtostderr:
I0729 10:38:38.091009   22277 out.go:291] Setting OutFile to fd 1 ...
I0729 10:38:38.091142   22277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.091153   22277 out.go:304] Setting ErrFile to fd 2...
I0729 10:38:38.091157   22277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.091312   22277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
I0729 10:38:38.091849   22277 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.091966   22277 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.092429   22277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.092475   22277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.107459   22277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33411
I0729 10:38:38.108033   22277 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.108607   22277 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.108630   22277 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.109004   22277 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.109231   22277 main.go:141] libmachine: (functional-503222) Calling .GetState
I0729 10:38:38.111300   22277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.111348   22277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.125759   22277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
I0729 10:38:38.126208   22277 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.126750   22277 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.126769   22277 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.127090   22277 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.127283   22277 main.go:141] libmachine: (functional-503222) Calling .DriverName
I0729 10:38:38.127466   22277 ssh_runner.go:195] Run: systemctl --version
I0729 10:38:38.127500   22277 main.go:141] libmachine: (functional-503222) Calling .GetSSHHostname
I0729 10:38:38.130340   22277 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.130717   22277 main.go:141] libmachine: (functional-503222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:83:76", ip: ""} in network mk-functional-503222: {Iface:virbr1 ExpiryTime:2024-07-29 11:35:12 +0000 UTC Type:0 Mac:52:54:00:bd:83:76 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-503222 Clientid:01:52:54:00:bd:83:76}
I0729 10:38:38.130744   22277 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined IP address 192.168.39.208 and MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.130866   22277 main.go:141] libmachine: (functional-503222) Calling .GetSSHPort
I0729 10:38:38.131040   22277 main.go:141] libmachine: (functional-503222) Calling .GetSSHKeyPath
I0729 10:38:38.131201   22277 main.go:141] libmachine: (functional-503222) Calling .GetSSHUsername
I0729 10:38:38.131346   22277 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/functional-503222/id_rsa Username:docker}
I0729 10:38:38.218315   22277 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 10:38:38.261711   22277 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.261725   22277 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.262019   22277 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.262037   22277 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:38.262044   22277 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 10:38:38.262064   22277 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.262073   22277 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.262298   22277 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.262309   22277 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
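The JSON listing carries the same data as the table, keyed by id, repoDigests, repoTags, and size, and is the easiest format to filter. A minimal sketch with jq (assumed installed on the host):

    # Print "tag  size" for every tagged image in the profile
    out/minikube-linux-amd64 -p functional-503222 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'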

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-503222 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-503222
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 33776f96cac1f046968066cc08e7326887f125992b5c73d06a5785ec5c956717
repoDigests:
- localhost/minikube-local-cache-test@sha256:606b426f0eebcea98c4f4b56c0a54c805f5516df53df1daaf8ef8a3a85e544dd
repoTags:
- localhost/minikube-local-cache-test:functional-503222
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-503222 image ls --format yaml --alsologtostderr:
I0729 10:38:37.825290   22242 out.go:291] Setting OutFile to fd 1 ...
I0729 10:38:37.825404   22242 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:37.825486   22242 out.go:304] Setting ErrFile to fd 2...
I0729 10:38:37.825497   22242 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:37.825852   22242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
I0729 10:38:37.826454   22242 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:37.826576   22242 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:37.827008   22242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:37.827067   22242 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:37.843732   22242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
I0729 10:38:37.844159   22242 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:37.844719   22242 main.go:141] libmachine: Using API Version  1
I0729 10:38:37.844739   22242 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:37.845083   22242 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:37.845307   22242 main.go:141] libmachine: (functional-503222) Calling .GetState
I0729 10:38:37.847243   22242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:37.847284   22242 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:37.864339   22242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
I0729 10:38:37.864756   22242 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:37.865233   22242 main.go:141] libmachine: Using API Version  1
I0729 10:38:37.865274   22242 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:37.865635   22242 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:37.865828   22242 main.go:141] libmachine: (functional-503222) Calling .DriverName
I0729 10:38:37.866045   22242 ssh_runner.go:195] Run: systemctl --version
I0729 10:38:37.866067   22242 main.go:141] libmachine: (functional-503222) Calling .GetSSHHostname
I0729 10:38:37.869227   22242 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:37.869682   22242 main.go:141] libmachine: (functional-503222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:83:76", ip: ""} in network mk-functional-503222: {Iface:virbr1 ExpiryTime:2024-07-29 11:35:12 +0000 UTC Type:0 Mac:52:54:00:bd:83:76 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-503222 Clientid:01:52:54:00:bd:83:76}
I0729 10:38:37.869727   22242 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined IP address 192.168.39.208 and MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:37.869925   22242 main.go:141] libmachine: (functional-503222) Calling .GetSSHPort
I0729 10:38:37.870101   22242 main.go:141] libmachine: (functional-503222) Calling .GetSSHKeyPath
I0729 10:38:37.870269   22242 main.go:141] libmachine: (functional-503222) Calling .GetSSHUsername
I0729 10:38:37.870406   22242 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/functional-503222/id_rsa Username:docker}
I0729 10:38:37.978448   22242 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 10:38:38.024156   22242 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.024168   22242 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.024444   22242 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.024467   22242 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 10:38:38.024450   22242 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:38.024482   22242 main.go:141] libmachine: Making call to close driver server
I0729 10:38:38.024513   22242 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:38.024766   22242 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:38.024780   22242 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh pgrep buildkitd: exit status 1 (193.060839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image build -t localhost/my-image:functional-503222 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 image build -t localhost/my-image:functional-503222 testdata/build --alsologtostderr: (5.38099702s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-503222 image build -t localhost/my-image:functional-503222 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 183d1eecec8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-503222
--> 0159323da48
Successfully tagged localhost/my-image:functional-503222
0159323da48757f9001cd1271e926040852b979e358a89ad61838c8f018dfdec
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-503222 image build -t localhost/my-image:functional-503222 testdata/build --alsologtostderr:
I0729 10:38:38.265113   22319 out.go:291] Setting OutFile to fd 1 ...
I0729 10:38:38.265393   22319 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.265401   22319 out.go:304] Setting ErrFile to fd 2...
I0729 10:38:38.265407   22319 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:38:38.265633   22319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
I0729 10:38:38.266261   22319 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.267408   22319 config.go:182] Loaded profile config "functional-503222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 10:38:38.267860   22319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.267906   22319 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.288042   22319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
I0729 10:38:38.288488   22319 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.289075   22319 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.289112   22319 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.289631   22319 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.289805   22319 main.go:141] libmachine: (functional-503222) Calling .GetState
I0729 10:38:38.291736   22319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 10:38:38.291768   22319 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 10:38:38.310941   22319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
I0729 10:38:38.311598   22319 main.go:141] libmachine: () Calling .GetVersion
I0729 10:38:38.312098   22319 main.go:141] libmachine: Using API Version  1
I0729 10:38:38.312125   22319 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 10:38:38.312487   22319 main.go:141] libmachine: () Calling .GetMachineName
I0729 10:38:38.312678   22319 main.go:141] libmachine: (functional-503222) Calling .DriverName
I0729 10:38:38.312901   22319 ssh_runner.go:195] Run: systemctl --version
I0729 10:38:38.312939   22319 main.go:141] libmachine: (functional-503222) Calling .GetSSHHostname
I0729 10:38:38.316145   22319 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.316492   22319 main.go:141] libmachine: (functional-503222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:83:76", ip: ""} in network mk-functional-503222: {Iface:virbr1 ExpiryTime:2024-07-29 11:35:12 +0000 UTC Type:0 Mac:52:54:00:bd:83:76 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:functional-503222 Clientid:01:52:54:00:bd:83:76}
I0729 10:38:38.316510   22319 main.go:141] libmachine: (functional-503222) DBG | domain functional-503222 has defined IP address 192.168.39.208 and MAC address 52:54:00:bd:83:76 in network mk-functional-503222
I0729 10:38:38.316632   22319 main.go:141] libmachine: (functional-503222) Calling .GetSSHPort
I0729 10:38:38.316786   22319 main.go:141] libmachine: (functional-503222) Calling .GetSSHKeyPath
I0729 10:38:38.316911   22319 main.go:141] libmachine: (functional-503222) Calling .GetSSHUsername
I0729 10:38:38.317042   22319 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/functional-503222/id_rsa Username:docker}
I0729 10:38:38.398123   22319 build_images.go:161] Building image from path: /tmp/build.4291894932.tar
I0729 10:38:38.398174   22319 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 10:38:38.409066   22319 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4291894932.tar
I0729 10:38:38.413762   22319 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4291894932.tar: stat -c "%s %y" /var/lib/minikube/build/build.4291894932.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4291894932.tar': No such file or directory
I0729 10:38:38.413796   22319 ssh_runner.go:362] scp /tmp/build.4291894932.tar --> /var/lib/minikube/build/build.4291894932.tar (3072 bytes)
I0729 10:38:38.442737   22319 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4291894932
I0729 10:38:38.455506   22319 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4291894932 -xf /var/lib/minikube/build/build.4291894932.tar
I0729 10:38:38.465410   22319 crio.go:315] Building image: /var/lib/minikube/build/build.4291894932
I0729 10:38:38.465482   22319 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-503222 /var/lib/minikube/build/build.4291894932 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 10:38:43.554191   22319 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-503222 /var/lib/minikube/build/build.4291894932 --cgroup-manager=cgroupfs: (5.088682806s)
I0729 10:38:43.554258   22319 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4291894932
I0729 10:38:43.581612   22319 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4291894932.tar
I0729 10:38:43.598722   22319 build_images.go:217] Built localhost/my-image:functional-503222 from /tmp/build.4291894932.tar
I0729 10:38:43.598758   22319 build_images.go:133] succeeded building to: functional-503222
I0729 10:38:43.598763   22319 build_images.go:134] failed building to: 
I0729 10:38:43.598787   22319 main.go:141] libmachine: Making call to close driver server
I0729 10:38:43.598798   22319 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:43.599088   22319 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:43.599108   22319 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 10:38:43.599118   22319 main.go:141] libmachine: Making call to close driver server
I0729 10:38:43.599090   22319 main.go:141] libmachine: (functional-503222) DBG | Closing plugin on server side
I0729 10:38:43.599126   22319 main.go:141] libmachine: (functional-503222) Calling .Close
I0729 10:38:43.599357   22319 main.go:141] libmachine: Successfully made call to close driver server
I0729 10:38:43.599369   22319 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.94287593s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-503222
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image load --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 image load --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr: (1.076867134s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image load --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-503222
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image load --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image save docker.io/kicbase/echo-server:functional-503222 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image rm docker.io/kicbase/echo-server:functional-503222 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdspecific-port1584436226/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.052748ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdspecific-port1584436226/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "sudo umount -f /mount-9p": exit status 1 (234.055765ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-503222 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdspecific-port1584436226/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-503222
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 image save --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-503222 image save --daemon docker.io/kicbase/echo-server:functional-503222 --alsologtostderr: (1.377616456s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-503222
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T" /mount1: exit status 1 (259.246175ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-503222 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-503222 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-503222 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789452202/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-503222
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-503222
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-503222
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (244.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-763049 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 10:39:57.915458   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 10:40:25.604733   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-763049 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m3.726754577s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (244.40s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-763049 -- rollout status deployment/busybox: (4.561876933s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-6s8vm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-bsjch -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-v8wqv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-6s8vm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-bsjch -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-v8wqv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-6s8vm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-bsjch -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-v8wqv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.71s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-6s8vm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0729 10:43:03.511046   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:03.516380   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-6s8vm -- sh -c "ping -c 1 192.168.39.1"
E0729 10:43:03.526725   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:03.547088   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:03.590807   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:03.671151   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-bsjch -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0729 10:43:03.831434   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-bsjch -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-v8wqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0729 10:43:04.151933   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-763049 -- exec busybox-fc5497c4f-v8wqv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-763049 -v=7 --alsologtostderr
E0729 10:43:04.792138   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:06.072843   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:08.633694   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:13.754135   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:23.994958   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:43:44.475235   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-763049 -v=7 --alsologtostderr: (55.026091788s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.85s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-763049 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp testdata/cp-test.txt ha-763049:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049:/home/docker/cp-test.txt ha-763049-m02:/home/docker/cp-test_ha-763049_ha-763049-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test_ha-763049_ha-763049-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049:/home/docker/cp-test.txt ha-763049-m03:/home/docker/cp-test_ha-763049_ha-763049-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test_ha-763049_ha-763049-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049:/home/docker/cp-test.txt ha-763049-m04:/home/docker/cp-test_ha-763049_ha-763049-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test_ha-763049_ha-763049-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp testdata/cp-test.txt ha-763049-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m02:/home/docker/cp-test.txt ha-763049:/home/docker/cp-test_ha-763049-m02_ha-763049.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test_ha-763049-m02_ha-763049.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m02:/home/docker/cp-test.txt ha-763049-m03:/home/docker/cp-test_ha-763049-m02_ha-763049-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test_ha-763049-m02_ha-763049-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m02:/home/docker/cp-test.txt ha-763049-m04:/home/docker/cp-test_ha-763049-m02_ha-763049-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test_ha-763049-m02_ha-763049-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp testdata/cp-test.txt ha-763049-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt ha-763049:/home/docker/cp-test_ha-763049-m03_ha-763049.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test_ha-763049-m03_ha-763049.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt ha-763049-m02:/home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test_ha-763049-m03_ha-763049-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m03:/home/docker/cp-test.txt ha-763049-m04:/home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test_ha-763049-m03_ha-763049-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp testdata/cp-test.txt ha-763049-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250926512/001/cp-test_ha-763049-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt ha-763049:/home/docker/cp-test_ha-763049-m04_ha-763049.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049 "sudo cat /home/docker/cp-test_ha-763049-m04_ha-763049.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt ha-763049-m02:/home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m02 "sudo cat /home/docker/cp-test_ha-763049-m04_ha-763049-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 cp ha-763049-m04:/home/docker/cp-test.txt ha-763049-m03:/home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 ssh -n ha-763049-m03 "sudo cat /home/docker/cp-test_ha-763049-m04_ha-763049-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.87s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.487844003s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-763049 node delete m03 -v=7 --alsologtostderr: (16.73062484s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.48s)
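For readability, the go-template passed to kubectl in the check above (and reused in the RestartCluster check below) is expanded here; the indentation is added only for reading and is not part of the command, since extra whitespace between template actions would be printed literally if run this way:

{{range .items}}
  {{range .status.conditions}}
    {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
  {{end}}
{{end}}

It emits the status of each node's Ready condition, one per line.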

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (282.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-763049 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 10:58:03.511630   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:59:26.559226   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 10:59:57.915818   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-763049 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m41.803825829s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (282.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-763049 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-763049 --control-plane -v=7 --alsologtostderr: (1m20.739680565s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-763049 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.57s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (96.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-560600 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0729 11:03:03.511182   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-560600 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.543779002s)
--- PASS: TestJSONOutput/start/Command (96.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-560600 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-560600 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-560600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-560600 --output=json --user=testUser: (7.347032958s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-534032 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-534032 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.258411ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"451b5d8d-ad4d-489b-be2a-70b46ee96f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-534032] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe053d3e-2b59-4cb1-8817-2111c0e386b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"a5c5ebe0-d7f7-47ec-8987-b5bfa741a88a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6cf76113-a6a2-4ce3-a819-7fbfefef3789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig"}}
	{"specversion":"1.0","id":"f33c7284-e41c-40a9-aa7e-12ef4d9f8bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube"}}
	{"specversion":"1.0","id":"fc41f853-e4a1-4c1f-ab9e-a2cce9ba5c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"71913ede-6a93-47f8-84b9-282b3f2e93ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"140e47ed-cfe2-477c-bade-a6a5895b3c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-534032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-534032
--- PASS: TestErrorJSONOutput (0.20s)
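For anyone consuming the --output=json stream shown above outside the test harness, each stdout line is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data). Below is a minimal, illustrative Go sketch for filtering error events out of such a stream; the cloudEvent struct is an assumption based only on the fields visible in the stdout block above, not a type taken from the minikube codebase:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the JSON lines above; it is an
// illustrative struct for this report, not minikube's own type.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line from stdin, e.g. piped from
	// "minikube start --output=json ...", and report error events such as
	// the DRV_UNSUPPORTED_OS event captured in this test.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}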

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (91.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-345246 --driver=kvm2  --container-runtime=crio
E0729 11:04:57.915772   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-345246 --driver=kvm2  --container-runtime=crio: (43.812803433s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-348410 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-348410 --driver=kvm2  --container-runtime=crio: (44.551207359s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-345246
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-348410
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-348410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-348410
helpers_test.go:175: Cleaning up "first-345246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-345246
--- PASS: TestMinikubeProfile (91.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-818413 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-818413 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.993256165s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.99s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-818413 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-818413 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-832231 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-832231 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.159926992s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-818413 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-832231
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-832231: (1.274461993s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-832231
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-832231: (21.954184817s)
--- PASS: TestMountStart/serial/RestartStopped (22.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832231 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)
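
The Stop, RestartStopped and VerifyMountPostStop steps above show that the 9p mount is re-established across a stop/start cycle. By hand, with a placeholder profile name:

    $ minikube stop -p mnt2
    $ minikube start -p mnt2                       # restart the previously created mount profile
    $ minikube -p mnt2 ssh -- mount | grep 9p      # the 9p mount is back after the restart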

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (126.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-893477 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 11:08:00.968738   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
E0729 11:08:03.511863   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-893477 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.956619661s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.36s)
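
The same two-node bring-up can be run outside the test harness; the profile name below is a placeholder:

    $ minikube start -p multi --wait=true --memory=2200 --nodes=2 --driver=kvm2 --container-runtime=crio
    $ minikube -p multi status     # expect a control plane plus one worker, all components Running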

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-893477 -- rollout status deployment/busybox: (3.949364903s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-mq79l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-zskmp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-mq79l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-zskmp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-mq79l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-zskmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.39s)
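
The DNS checks above use the busybox deployment from testdata/multinodes/multinode-pod-dns-test.yaml. A rough by-hand equivalent is sketched below; the profile name is a placeholder and the actual busybox pod names will differ from run to run:

    $ minikube kubectl -p multi -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    $ minikube kubectl -p multi -- rollout status deployment/busybox
    $ minikube kubectl -p multi -- get pods -o jsonpath='{.items[*].metadata.name}'
    $ minikube kubectl -p multi -- exec <busybox-pod> -- nslookup kubernetes.default   # per-pod DNS check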

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-mq79l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-mq79l -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-zskmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-893477 -- exec busybox-fc5497c4f-zskmp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (54.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-893477 -v 3 --alsologtostderr
E0729 11:09:57.915637   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-893477 -v 3 --alsologtostderr: (53.732718747s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.30s)
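
Adding a third node to an existing profile follows the same pattern the test uses (placeholder profile name):

    $ minikube node add -p multi -v 3 --alsologtostderr   # joins a new worker to the running cluster
    $ minikube -p multi status                            # the new node should report Running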

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-893477 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp testdata/cp-test.txt multinode-893477:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477:/home/docker/cp-test.txt multinode-893477-m02:/home/docker/cp-test_multinode-893477_multinode-893477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test_multinode-893477_multinode-893477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477:/home/docker/cp-test.txt multinode-893477-m03:/home/docker/cp-test_multinode-893477_multinode-893477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test_multinode-893477_multinode-893477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp testdata/cp-test.txt multinode-893477-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt multinode-893477:/home/docker/cp-test_multinode-893477-m02_multinode-893477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test_multinode-893477-m02_multinode-893477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m02:/home/docker/cp-test.txt multinode-893477-m03:/home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test_multinode-893477-m02_multinode-893477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp testdata/cp-test.txt multinode-893477-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3316282178/001/cp-test_multinode-893477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt multinode-893477:/home/docker/cp-test_multinode-893477-m03_multinode-893477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477 "sudo cat /home/docker/cp-test_multinode-893477-m03_multinode-893477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 cp multinode-893477-m03:/home/docker/cp-test.txt multinode-893477-m02:/home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 ssh -n multinode-893477-m02 "sudo cat /home/docker/cp-test_multinode-893477-m03_multinode-893477-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.09s)
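
The copy matrix above boils down to three `minikube cp` directions, each verified over ssh. A condensed sketch with placeholder profile and node names:

    $ minikube -p multi cp testdata/cp-test.txt multi:/home/docker/cp-test.txt           # host -> node
    $ minikube -p multi cp multi:/home/docker/cp-test.txt /tmp/cp-test_multi.txt         # node -> host
    $ minikube -p multi cp multi:/home/docker/cp-test.txt multi-m02:/home/docker/cp-test_from_multi.txt   # node -> node
    $ minikube -p multi ssh -n multi-m02 "sudo cat /home/docker/cp-test_from_multi.txt"  # verify the copied contents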

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-893477 node stop m03: (1.428959172s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-893477 status: exit status 7 (421.585979ms)

                                                
                                                
-- stdout --
	multinode-893477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-893477-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-893477-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr: exit status 7 (421.43815ms)

                                                
                                                
-- stdout --
	multinode-893477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-893477-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-893477-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:10:48.077646   39891 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:10:48.077768   39891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:10:48.077778   39891 out.go:304] Setting ErrFile to fd 2...
	I0729 11:10:48.077785   39891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:10:48.077980   39891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:10:48.078161   39891 out.go:298] Setting JSON to false
	I0729 11:10:48.078192   39891 mustload.go:65] Loading cluster: multinode-893477
	I0729 11:10:48.078295   39891 notify.go:220] Checking for updates...
	I0729 11:10:48.078592   39891 config.go:182] Loaded profile config "multinode-893477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:10:48.078607   39891 status.go:255] checking status of multinode-893477 ...
	I0729 11:10:48.078990   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.079065   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.099162   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0729 11:10:48.099670   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.100189   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.100210   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.100649   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.100881   39891 main.go:141] libmachine: (multinode-893477) Calling .GetState
	I0729 11:10:48.102572   39891 status.go:330] multinode-893477 host status = "Running" (err=<nil>)
	I0729 11:10:48.102586   39891 host.go:66] Checking if "multinode-893477" exists ...
	I0729 11:10:48.102981   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.103023   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.117907   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I0729 11:10:48.118453   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.118999   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.119023   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.119308   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.119500   39891 main.go:141] libmachine: (multinode-893477) Calling .GetIP
	I0729 11:10:48.122525   39891 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:10:48.122985   39891 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:10:48.123023   39891 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:10:48.123131   39891 host.go:66] Checking if "multinode-893477" exists ...
	I0729 11:10:48.123432   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.123471   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.139340   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0729 11:10:48.139749   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.140223   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.140246   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.140574   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.140773   39891 main.go:141] libmachine: (multinode-893477) Calling .DriverName
	I0729 11:10:48.141040   39891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:10:48.141076   39891 main.go:141] libmachine: (multinode-893477) Calling .GetSSHHostname
	I0729 11:10:48.143850   39891 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:10:48.144260   39891 main.go:141] libmachine: (multinode-893477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:6d:2b", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:07:46 +0000 UTC Type:0 Mac:52:54:00:21:6d:2b Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-893477 Clientid:01:52:54:00:21:6d:2b}
	I0729 11:10:48.144289   39891 main.go:141] libmachine: (multinode-893477) DBG | domain multinode-893477 has defined IP address 192.168.39.159 and MAC address 52:54:00:21:6d:2b in network mk-multinode-893477
	I0729 11:10:48.144417   39891 main.go:141] libmachine: (multinode-893477) Calling .GetSSHPort
	I0729 11:10:48.144570   39891 main.go:141] libmachine: (multinode-893477) Calling .GetSSHKeyPath
	I0729 11:10:48.144766   39891 main.go:141] libmachine: (multinode-893477) Calling .GetSSHUsername
	I0729 11:10:48.144888   39891 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477/id_rsa Username:docker}
	I0729 11:10:48.226363   39891 ssh_runner.go:195] Run: systemctl --version
	I0729 11:10:48.232385   39891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:10:48.247676   39891 kubeconfig.go:125] found "multinode-893477" server: "https://192.168.39.159:8443"
	I0729 11:10:48.247702   39891 api_server.go:166] Checking apiserver status ...
	I0729 11:10:48.247738   39891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:10:48.261587   39891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0729 11:10:48.270612   39891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:10:48.270663   39891 ssh_runner.go:195] Run: ls
	I0729 11:10:48.275535   39891 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0729 11:10:48.279464   39891 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0729 11:10:48.279486   39891 status.go:422] multinode-893477 apiserver status = Running (err=<nil>)
	I0729 11:10:48.279498   39891 status.go:257] multinode-893477 status: &{Name:multinode-893477 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:10:48.279517   39891 status.go:255] checking status of multinode-893477-m02 ...
	I0729 11:10:48.279815   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.279854   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.295532   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0729 11:10:48.295966   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.296406   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.296427   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.296689   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.296869   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetState
	I0729 11:10:48.298267   39891 status.go:330] multinode-893477-m02 host status = "Running" (err=<nil>)
	I0729 11:10:48.298284   39891 host.go:66] Checking if "multinode-893477-m02" exists ...
	I0729 11:10:48.298563   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.298602   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.313922   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I0729 11:10:48.314291   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.314794   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.314819   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.315144   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.315342   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetIP
	I0729 11:10:48.317905   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | domain multinode-893477-m02 has defined MAC address 52:54:00:68:96:94 in network mk-multinode-893477
	I0729 11:10:48.318307   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:96:94", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:08:58 +0000 UTC Type:0 Mac:52:54:00:68:96:94 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-893477-m02 Clientid:01:52:54:00:68:96:94}
	I0729 11:10:48.318326   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | domain multinode-893477-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:68:96:94 in network mk-multinode-893477
	I0729 11:10:48.318454   39891 host.go:66] Checking if "multinode-893477-m02" exists ...
	I0729 11:10:48.318815   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.318855   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.334436   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0729 11:10:48.334824   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.335290   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.335312   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.335601   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.335785   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .DriverName
	I0729 11:10:48.335983   39891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:10:48.336003   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetSSHHostname
	I0729 11:10:48.338412   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | domain multinode-893477-m02 has defined MAC address 52:54:00:68:96:94 in network mk-multinode-893477
	I0729 11:10:48.338827   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:96:94", ip: ""} in network mk-multinode-893477: {Iface:virbr1 ExpiryTime:2024-07-29 12:08:58 +0000 UTC Type:0 Mac:52:54:00:68:96:94 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-893477-m02 Clientid:01:52:54:00:68:96:94}
	I0729 11:10:48.338854   39891 main.go:141] libmachine: (multinode-893477-m02) DBG | domain multinode-893477-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:68:96:94 in network mk-multinode-893477
	I0729 11:10:48.338988   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetSSHPort
	I0729 11:10:48.339150   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetSSHKeyPath
	I0729 11:10:48.339295   39891 main.go:141] libmachine: (multinode-893477-m02) Calling .GetSSHUsername
	I0729 11:10:48.339396   39891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19337-3845/.minikube/machines/multinode-893477-m02/id_rsa Username:docker}
	I0729 11:10:48.422015   39891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:10:48.435657   39891 status.go:257] multinode-893477-m02 status: &{Name:multinode-893477-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:10:48.435697   39891 status.go:255] checking status of multinode-893477-m03 ...
	I0729 11:10:48.435999   39891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:10:48.436036   39891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:10:48.452545   39891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0729 11:10:48.452964   39891 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:10:48.453400   39891 main.go:141] libmachine: Using API Version  1
	I0729 11:10:48.453418   39891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:10:48.453739   39891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:10:48.453922   39891 main.go:141] libmachine: (multinode-893477-m03) Calling .GetState
	I0729 11:10:48.455616   39891 status.go:330] multinode-893477-m03 host status = "Stopped" (err=<nil>)
	I0729 11:10:48.455631   39891 status.go:343] host is not running, skipping remaining checks
	I0729 11:10:48.455639   39891 status.go:257] multinode-893477-m03 status: &{Name:multinode-893477-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
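
As the captured status output shows, `minikube status` exits non-zero (status 7 in this run) once any node is stopped, which is what the assertions rely on. By hand, with placeholder names:

    $ minikube -p multi node stop m03     # stop only the third node
    $ minikube -p multi status            # prints per-node state; non-zero exit because m03 is Stopped
    $ echo $?                             # 7 in this run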

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-893477 node start m03 -v=7 --alsologtostderr: (39.684367284s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.31s)
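
Restarting the stopped node and re-checking cluster health mirrors the commands in the log (placeholder profile name):

    $ minikube -p multi node start m03 -v=7 --alsologtostderr
    $ minikube -p multi status -v=7 --alsologtostderr    # succeeds again once m03 is Running
    $ kubectl get nodes                                  # all nodes report Ready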

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-893477 node delete m03: (1.937805549s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (179.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-893477 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 11:19:57.915533   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-893477 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.422415559s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-893477 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.94s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-893477
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-893477-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-893477-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.642057ms)

                                                
                                                
-- stdout --
	* [multinode-893477-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-893477-m02' is duplicated with machine name 'multinode-893477-m02' in profile 'multinode-893477'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-893477-m03 --driver=kvm2  --container-runtime=crio
E0729 11:23:03.510712   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-893477-m03 --driver=kvm2  --container-runtime=crio: (44.138768754s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-893477
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-893477: exit status 80 (208.872679ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-893477 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-893477-m03 already exists in multinode-893477-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-893477-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.21s)
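
The two failure modes captured above are worth keeping in mind when scripting: reusing a machine name that already belongs to a multi-node profile is rejected up front (MK_USAGE, exit 14), and `minikube node add` refuses to add a node whose generated name collides with an existing standalone profile (GUEST_NODE_ADD, exit 80). A sketch with placeholder names:

    $ minikube start -p multi-m02 --driver=kvm2 --container-runtime=crio   # rejected: "m02" is already a machine inside profile "multi" (exit 14)
    $ minikube start -p multi-m03 --driver=kvm2 --container-runtime=crio   # succeeds as a standalone profile
    $ minikube node add -p multi                                           # rejected: the next node name collides with profile "multi-m03" (exit 80)
    $ minikube delete -p multi-m03                                         # removing the conflicting profile clears the error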

                                                
                                    
x
+
TestScheduledStopUnix (111.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-409115 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-409115 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.867745771s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409115 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-409115 -n scheduled-stop-409115
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409115 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409115 -n scheduled-stop-409115
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-409115
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0729 11:28:03.510877   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-409115
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-409115: exit status 7 (61.615447ms)

                                                
                                                
-- stdout --
	scheduled-stop-409115
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409115 -n scheduled-stop-409115
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409115 -n scheduled-stop-409115: exit status 7 (64.605054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-409115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-409115
--- PASS: TestScheduledStopUnix (111.45s)
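
The scheduled-stop flow exercised here is driven entirely by `minikube stop --schedule` and `--cancel-scheduled`, with `status --format` used to poll the countdown and the final host state. A by-hand sketch (placeholder profile name):

    $ minikube stop -p sched --schedule 5m                   # arm a stop 5 minutes out
    $ minikube status --format={{.TimeToStop}} -p sched      # remaining time until the scheduled stop
    $ minikube stop -p sched --cancel-scheduled              # cancel it
    $ minikube stop -p sched --schedule 15s                  # re-arm with a short window and let it fire
    $ minikube status --format={{.Host}} -p sched            # "Stopped" afterwards; status exits 7 (may be ok)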

                                                
                                    
x
+
TestRunningBinaryUpgrade (118.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4175974495 start -p running-upgrade-342576 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4175974495 start -p running-upgrade-342576 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (53.263523022s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-342576 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-342576 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.767557847s)
helpers_test.go:175: Cleaning up "running-upgrade-342576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-342576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-342576: (1.172976665s)
--- PASS: TestRunningBinaryUpgrade (118.84s)
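
This upgrade test starts a cluster with an older released binary and then re-runs `start` on the same profile with the freshly built one; note that the legacy binary takes `--vm-driver` where current builds take `--driver`. Sketch, with the old-binary path as a placeholder:

    $ /path/to/minikube-v1.26.0 start -p upgrade --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ minikube start -p upgrade --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    $ minikube delete -p upgrade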

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-184479 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-184479 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.119303ms)

                                                
                                                
-- stdout --
	* [false-184479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:28:07.707855   47976 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:28:07.708008   47976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:28:07.708021   47976 out.go:304] Setting ErrFile to fd 2...
	I0729 11:28:07.708028   47976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:28:07.708335   47976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-3845/.minikube/bin
	I0729 11:28:07.709152   47976 out.go:298] Setting JSON to false
	I0729 11:28:07.710349   47976 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4234,"bootTime":1722248254,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:28:07.710440   47976 start.go:139] virtualization: kvm guest
	I0729 11:28:07.712657   47976 out.go:177] * [false-184479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:28:07.714102   47976 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:28:07.714143   47976 notify.go:220] Checking for updates...
	I0729 11:28:07.717010   47976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:28:07.718523   47976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	I0729 11:28:07.719826   47976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	I0729 11:28:07.721013   47976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:28:07.722263   47976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:28:07.724177   47976 config.go:182] Loaded profile config "force-systemd-flag-371697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:28:07.724306   47976 config.go:182] Loaded profile config "kubernetes-upgrade-302301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 11:28:07.724436   47976 config.go:182] Loaded profile config "offline-crio-290694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:28:07.724541   47976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:28:07.763634   47976 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:28:07.764988   47976 start.go:297] selected driver: kvm2
	I0729 11:28:07.765008   47976 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:28:07.765023   47976 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:28:07.767090   47976 out.go:177] 
	W0729 11:28:07.768541   47976 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 11:28:07.769914   47976 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-184479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-184479

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184479"

                                                
                                                
----------------------- debugLogs end: false-184479 [took: 2.569534309s] --------------------------------
helpers_test.go:175: Cleaning up "false-184479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-184479
--- PASS: TestNetworkPlugins/group/false (2.83s)
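
The run of "Profile \"false-184479\" not found" and "context \"false-184479\" does not exist" messages in the debugLogs dump above is expected rather than a collection failure: the false CNI option appears to be rejected up front (the whole test, including 2.57s of debug-log collection, finished in 2.83s), so there was never a VM or kubeconfig context for the collector to query. A quick way to confirm nothing was left behind, sketched here under the same workspace layout, is:

    $ out/minikube-linux-amd64 profile list
    $ kubectl config get-contexts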

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1628036182 start -p stopped-upgrade-867440 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1628036182 start -p stopped-upgrade-867440 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m15.623606871s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1628036182 -p stopped-upgrade-867440 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1628036182 -p stopped-upgrade-867440 stop: (2.139831066s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-867440 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-867440 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.633916649s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-867440
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (65.52853ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-941459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-3845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-3845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
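
The MK_USAGE failure above is the expected outcome: minikube refuses to combine --no-kubernetes with --kubernetes-version, and this subtest only checks that the rejection happens. The printed hint applies when the version is pinned in the global config rather than passed on the command line; in that situation the recovery would look roughly like:

    $ out/minikube-linux-amd64 config unset kubernetes-version
    $ out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --driver=kvm2 --container-runtime=crio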

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941459 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941459 --driver=kvm2  --container-runtime=crio: (47.380357515s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-941459 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.771283241s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-941459 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-941459 status -o json: exit status 2 (226.40023ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-941459","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-941459
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-941459: (1.004767751s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.00s)
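
The exit status 2 from "status -o json" is likewise expected: minikube status exits non-zero when a tracked component is stopped, and for a --no-kubernetes profile the kubelet and API server are deliberately down while the host VM keeps running. A sketch for pulling a single field out of that JSON (assuming jq is available on the runner, which the report itself does not show):

    $ out/minikube-linux-amd64 -p NoKubernetes-941459 status -o json | jq -r .Kubelet
    Stopped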

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941459 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.889363807s)
--- PASS: TestNoKubernetes/serial/Start (26.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-941459 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-941459 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.746725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (19.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0729 11:32:46.560762   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.253596663s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.993381392s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.25s)

                                                
                                    
x
+
TestPause/serial/Start (69.83s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581851 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-581851 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m9.832820338s)
--- PASS: TestPause/serial/Start (69.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-941459
E0729 11:33:03.511101   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-941459: (1.383881746s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (51.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941459 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941459 --driver=kvm2  --container-runtime=crio: (51.576871496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-941459 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-941459 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.753389ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
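
Both VerifyK8sNotRunning checks lean on systemd semantics: "systemctl is-active" exits non-zero for a unit that is not active (3 is the conventional code for an inactive service), which minikube ssh surfaces as "Process exited with status 3" and the test counts as success. Run by hand without --quiet, the state is printed as well; a sketch against the same profile:

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-941459 "sudo systemctl is-active kubelet"
    inactive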

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m27.525157002s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (106.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m46.129825365s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (137.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0729 11:34:57.915529   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/addons-342031/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m17.467980932s)
--- PASS: TestNetworkPlugins/group/calico/Start (137.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (100.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m40.724977114s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (100.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cjm5k" [407435e1-3e8d-47a8-83e6-4789375f9586] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cjm5k" [407435e1-3e8d-47a8-83e6-4789375f9586] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004027449s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
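
The three probes above cover the basic CNI expectations for the auto network: in-cluster DNS (nslookup against kubernetes.default), plain localhost reachability, and hairpin traffic, i.e. the pod reaching what is presumably its own service name ("netcat"). The nc invocation is a port scan rather than a data transfer (-z) with a 5-second timeout (-w 5). While the profile still exists, the same check can be replayed manually, for example:

    $ kubectl --context auto-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"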

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (112.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.345848084s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6djm2" [601adada-5fc7-4b8b-98b6-8e2dd8baa7ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005356734s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bwskl" [e10b21f3-646a-457e-b8db-732a89a2b817] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bwskl" [e10b21f3-646a-457e-b8db-732a89a2b817] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004088705s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (81.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m21.797143648s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9t8jn" [947b3a95-318b-4f87-a899-209f39713cec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005131241s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-64wz4" [c9442c50-6859-4cc1-883b-253674028b0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-64wz4" [c9442c50-6859-4cc1-883b-253674028b0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004716132s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kdn76" [68518ac5-9d3d-45b6-866a-8c2dbb7397c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kdn76" [68518ac5-9d3d-45b6-866a-8c2dbb7397c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004437915s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (101.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-184479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.411856094s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bl68k" [f2090c44-d4e5-4e90-90fd-93418e2df44d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bl68k" [f2090c44-d4e5-4e90-90fd-93418e2df44d] Running
E0729 11:38:03.511577   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004134178s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qpxxc" [9ed97a45-0e1d-446f-9e2f-bd5a5711badf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004864808s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (77.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-297799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-297799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m17.319392882s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tz544" [f5b3714f-875a-464e-a0c2-70c91ff8571d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tz544" [f5b3714f-875a-464e-a0c2-70c91ff8571d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005198135s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (103.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-731235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-731235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m43.143501197s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-184479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-184479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-89zgh" [a920161f-99b3-464d-ae9d-6b403ab1490c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-89zgh" [a920161f-99b3-464d-ae9d-6b403ab1490c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00405045s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-184479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-184479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-297799 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b645ee37-5d45-4ac3-bb23-d4e27d1e4217] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b645ee37-5d45-4ac3-bb23-d4e27d1e4217] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0053025s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-297799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m35.685073721s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-297799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-297799 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)
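
EnableAddonWhileActive exercises minikube's per-addon override flags: --images and --registries replace the image and registry used for a named addon component (MetricsServer here) before the addon is enabled. The general shape, taken from the command above with no extra flags assumed, is:

    $ out/minikube-linux-amd64 addons enable metrics-server -p no-preload-297799 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain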

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-731235 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93c13d9b-315c-4d72-b7c6-819908634370] Pending
E0729 11:40:37.311214   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:40:37.951783   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:344: "busybox" [93c13d9b-315c-4d72-b7c6-819908634370] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 11:40:39.232676   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
helpers_test.go:344: "busybox" [93c13d9b-315c-4d72-b7c6-819908634370] Running
E0729 11:40:41.792986   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004081898s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-731235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-731235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 11:40:46.913516   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-731235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012264555s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-731235 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)
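The test only runs `kubectl describe` on the deployment after enabling the addon. A quick way to confirm that the `--images`/`--registries` overrides actually landed is to read the container image straight out of the deployment spec; the jsonpath expression and the assumption of a single container are mine, not part of the test:

# Print the image the metrics-server deployment was rewritten to use; with the flags above it
# should show the fake.domain registry override prefixing the echoserver image.
kubectl --context embed-certs-731235 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'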

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c45434b7-1e0d-4546-a876-81ffc7351bb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 11:41:30.738065   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/kindnet-184479/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c45434b7-1e0d-4546-a876-81ffc7351bb5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005113634s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-754486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-754486 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (682.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-297799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 11:42:26.669914   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:42:33.575315   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-297799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m21.845262775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-297799 -n no-preload-297799
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (682.09s)
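The final status check above only asserts that the host is up after the 11-minute restart. A follow-up that also confirms the restarted control plane is serving could look like the sketch below; the node-readiness wait and its 5m timeout are additions layered on top of what the test actually runs:

# Host-level check, as in the test.
out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-297799 -n no-preload-297799
# Optional stronger check: wait for every node in the restarted cluster to report Ready (assumed 5m timeout).
kubectl --context no-preload-297799 wait --for=condition=Ready node --all --timeout=5m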

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (564.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-731235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 11:43:19.769807   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:19.775101   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:19.785464   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:19.805833   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:19.846183   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:19.926557   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:20.086941   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:20.407582   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:20.517184   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/auto-184479/client.crt: no such file or directory
E0729 11:43:21.047837   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:22.328083   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:24.888291   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:28.111919   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/calico-184479/client.crt: no such file or directory
E0729 11:43:30.009435   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
E0729 11:43:35.016596   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/custom-flannel-184479/client.crt: no such file or directory
E0729 11:43:38.569299   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-731235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m23.889321564s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-731235 -n embed-certs-731235
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (564.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (534.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 11:44:19.530791   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 11:44:23.690867   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:23.696182   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:23.706441   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:23.726781   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:23.767156   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:23.847484   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:24.007901   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:24.328517   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:24.969466   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:26.250041   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
E0729 11:44:28.810679   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m53.864611511s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754486 -n default-k8s-diff-port-754486
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (534.13s)
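Because this profile runs the API server on a non-default port (8444), one way to spot-check that the restart preserved it is to read the server URL out of the kubeconfig. A minimal sketch; the jsonpath filter and the assumption that the kubeconfig cluster entry is named after the profile are mine, not the test's:

# Print the API server URL recorded for this profile's cluster entry; it should end in :8444.
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-754486")].cluster.server}'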

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-188043 --alsologtostderr -v=3
E0729 11:44:33.931147   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-188043 --alsologtostderr -v=3: (4.276463902s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188043 -n old-k8s-version-188043: exit status 7 (64.232731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-188043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
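The `exit status 7 (may be ok)` pattern above comes from `minikube status` returning a non-zero code for a stopped profile while still printing a usable state. A script that tolerates the same condition might look like this sketch; treating the non-zero exit as "stopped, not an error" is an assumption based on the log, not a documented contract:

# Query host state without letting the stopped profile's non-zero exit abort the script.
state="$(out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-188043 -n old-k8s-version-188043)" || rc=$?
echo "host=${state} exit=${rc:-0}"   # the test accepts exit 7 here as long as the state is Stopped
# Re-enable the addon against the stopped profile, as the test does next.
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-188043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4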

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-485099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 12:07:57.606994   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/enable-default-cni-184479/client.crt: no such file or directory
E0729 12:08:03.511432   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/functional-503222/client.crt: no such file or directory
E0729 12:08:19.770291   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/flannel-184479/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-485099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (47.631326157s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-485099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-485099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.199431986s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-485099 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-485099 --alsologtostderr -v=3: (7.378949947s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-485099 -n newest-cni-485099
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-485099 -n newest-cni-485099: exit status 7 (62.475375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-485099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-485099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-485099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (36.873369585s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-485099 -n newest-cni-485099
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-485099 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)
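The image audit above flags `kindest/kindnetd` as a non-minikube image. A quick manual equivalent is to list the profile's images and filter out the expected Kubernetes registries; the set of registries treated as "expected" below is an assumption, not the test's actual allow-list:

# List the profile's images and show anything outside the usual k8s registries.
out/minikube-linux-amd64 -p newest-cni-485099 image list | grep -vE 'registry\.k8s\.io|gcr\.io/k8s-minikube'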

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-485099 --alsologtostderr -v=1
E0729 12:09:23.690735   11064 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-3845/.minikube/profiles/bridge-184479/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-485099 -n newest-cni-485099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-485099 -n newest-cni-485099: exit status 2 (242.520337ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-485099 -n newest-cni-485099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-485099 -n newest-cni-485099: exit status 2 (241.344284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-485099 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-485099 -n newest-cni-485099
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-485099 -n newest-cni-485099
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)
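The pause/unpause cycle in this test is four CLI calls with status probes in between; note from the output above that a paused profile reports Paused for the API server but Stopped for the kubelet, and both probes exit non-zero. A condensed sketch of the same sequence; the `|| true` guards are additions so a scripted run does not abort on the expected non-zero exits:

out/minikube-linux-amd64 pause -p newest-cni-485099 --alsologtostderr -v=1
# Both status probes exit 2 while paused, so tolerate the failure and just inspect the printed state.
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-485099 -n newest-cni-485099 || true   # expect: Paused
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-485099 -n newest-cni-485099 || true     # expect: Stopped
out/minikube-linux-amd64 unpause -p newest-cni-485099 --alsologtostderr -v=1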

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 2.76
269 TestNetworkPlugins/group/cilium 3.06
290 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-184479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
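The empty kubeconfig above is the root cause of every "context was not found" / "does not exist" error in this dump: the kubenet profile was never started, so there is no context for the debug probes to use. Two quick checks that make this state obvious; both commands are referenced elsewhere in this log, and running them here is a suggestion rather than part of the harness:

# Neither listing will contain a kubenet-184479 entry for a profile that was never started.
kubectl config get-contexts
minikube profile list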

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-184479

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184479"

                                                
                                                
----------------------- debugLogs end: kubenet-184479 [took: 2.621952362s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-184479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-184479
--- SKIP: TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-184479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-184479" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-184479

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-184479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184479"

                                                
                                                
----------------------- debugLogs end: cilium-184479 [took: 2.925526403s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-184479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-184479
--- SKIP: TestNetworkPlugins/group/cilium (3.06s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-574387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-574387
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
